From tim.one@home.com  Thu Mar  1 00:02:39 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 28 Feb 2001 19:02:39 -0500
Subject: [Python-Dev] New fatal error in toaiff.py
Message-ID: <LNBBLJKPBEHFEDALKOLCAEOFJCAA.tim.one@home.com>

>python
Python 2.1a2 (#10, Feb 28 2001, 14:06:44) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import toaiff
Fatal Python error: unknown scope for _toaiff in ?(0) in
    c:\code\python\dist\src\lib\toaiff.py

abnormal program termination

>



From ping@lfw.org  Thu Mar  1 00:13:40 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Wed, 28 Feb 2001 16:13:40 -0800 (PST)
Subject: [Python-Dev] pydoc for CLI-less platforms
Message-ID: <Pine.LNX.4.10.10102281605370.21681-100000@localhost>

For platforms without a command line, like Windows and Mac,
pydoc will probably be used most often as a web server.
The version in CVS right now runs the server invisibly in
the background.  I just added a little GUI to control it
but i don't have an available Windows platform to test on
right now.  If you happen to have a few minutes to spare
and Windows 9x/NT/2k or a Mac, i would really appreciate it
if you could give

    http://www.lfw.org/python/pydoc.py

a quick whirl.  It is intended to be invoked on Windows
platforms eventually as pydoc.pyw, so ignore the DOS box
that appears and let me know if the GUI works and behaves
sensibly for you.  When it's okay, i'll check it in.

Many thanks,


-- ?!ng


Windows and Mac compatibility changes:
    handle both <function foo at 0x827a18> and <function foo at 005D7C80>
    normalize case of paths on sys.path to get rid of duplicates
    change 'localhost' to '127.0.0.1' (Mac likes this better)
    add a tiny GUI for stopping the web server



From ping@lfw.org  Thu Mar  1 00:31:19 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Wed, 28 Feb 2001 16:31:19 -0800 (PST)
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: <200102282325.SAA31347@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10102281630330.21681-100000@localhost>

On Wed, 28 Feb 2001, Guido van Rossum wrote:
> +1, but first address the comments about test_inspect.py with -O.

Okay, will do (will fix test_inspect, won't change UnboundLocalError).


-- ?!ng



From pedroni@inf.ethz.ch  Thu Mar  1 00:57:45 2001
From: pedroni@inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 01:57:45 +0100
Subject: [Python-Dev] nested scopes. global: have I got it right?
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <000d01c0a1ea$a1d53e60$f55821c0@newmexico>

Hi. Is the following true?

PEP227 states:
"""
If the global statement occurs within a block, all uses of the
name specified in the statement refer to the binding of that name
in the top-level namespace.
"""

but this is a bit ambiguous, because the global declaration (I imagine
for backward compatibility) does not affect the code blocks of nested
(function) definitions.  So

x=7
def f():
  global x
  def g():
    exec "x=3"
    return x
  print g()

f()

prints 3, not 7.
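
For comparison, here is a sketch of how the same shape behaves on modern
Python 3, where exec is a function and cannot introduce a binding that the
compiler of the enclosing function can see (variable names as in the
example above):

```python
x = 7

def f():
    def g():
        # exec() runs the string against a throwaway copy of g's local
        # namespace; the compiled body of g never sees the binding.
        exec("x = 3")
        return x  # compiled as a global lookup, so the exec is invisible
    return g()

print(f())  # 7 on Python 3; the 2.1a2 interpreter discussed here printed 3
```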


PS: This improves backward compatibility, but either the PEP is ambiguous
or the block concept does not extend to nested definitions(?).  This
affects only special cases, but it is quite strange, in the presence of
nested scopes, to have a declaration that does not extend to inner scopes.



From guido@digicool.com  Thu Mar  1 01:08:32 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 28 Feb 2001 20:08:32 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: Your message of "Thu, 01 Mar 2001 01:57:45 +0100."
 <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
 <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
Message-ID: <200103010108.UAA00516@cj20424-a.reston1.va.home.com>

> Hi. Is the following true?
> 
> PEP227 states:
> """
> If the global statement occurs within a block, all uses of the
> name specified in the statement refer to the binding of that name
> in the top-level namespace.
> """
> 
> but this is a bit ambiguous, because the global decl (I imagine for
> backw-compatibility)
> does not affect the code blocks of nested (func) definitions. So
> 
> x=7
> def f():
>   global x
>   def g():
>     exec "x=3"
>     return x
>   print g()
> 
> f()
> 
> prints 3, not 7.

Unclear whether this should change.  The old rule can also be read as
"you have to repeat 'global' for a variable in each scope where you
intend to assign to it".

> PS: This improves backward compatibility, but either the PEP is
> ambiguous or the block concept does not extend to nested
> definitions(?).  This affects only special cases, but it is quite
> strange, in the presence of nested scopes, to have a declaration that
> does not extend to inner scopes.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From pedroni@inf.ethz.ch  Thu Mar  1 01:24:53 2001
From: pedroni@inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 02:24:53 +0100
Subject: [Python-Dev] nested scopes. global: have I got it right?
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>             <000d01c0a1ea$a1d53e60$f55821c0@newmexico>  <200103010108.UAA00516@cj20424-a.reston1.va.home.com>
Message-ID: <005301c0a1ee$6c30cdc0$f55821c0@newmexico>

I didn't want to start a discussion; I was more concerned whether I got
the semantics (that I should implement) right.
So:
  x=7
  def f():
     x=1
     def g():
       global x
       def h(): return x
       return h()
     return g()

will print 1. Ok.

regards.

PS: I tried this with a2 and Python just died; I imagine this has been fixed.




From guido@digicool.com  Thu Mar  1 01:42:49 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 28 Feb 2001 20:42:49 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: Your message of "Thu, 01 Mar 2001 02:24:53 +0100."
 <005301c0a1ee$6c30cdc0$f55821c0@newmexico>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <200103010108.UAA00516@cj20424-a.reston1.va.home.com>
 <005301c0a1ee$6c30cdc0$f55821c0@newmexico>
Message-ID: <200103010142.UAA00686@cj20424-a.reston1.va.home.com>

> I didn't want to start a discussion; I was more concerned whether I got
> the semantics (that I should implement) right.
> So:
>   x=7
>   def f():
>      x=1
>      def g():
>        global x
>        def h(): return x
>        return h()
>      return g()

and then print f() as main, right?

> will print 1. Ok.
> 
> regards.

Argh!  I honestly don't know what this ought to do.  Under the rules
as I currently think of them this would print 1.  But that's at least
surprising, so maybe we'll have to revisit this.

Jeremy, also please note that if I add "from __future__ import
nested_scopes" to the top, this dumps core, saying: 

    lookup 'x' in g 2 -1
    Fatal Python error: com_make_closure()
    Aborted (core dumped)

Maybe you can turn this into a regular error? <0.5 wink>

> PS: I tried this with a2 and python just died, I imagine, this has
> been fixed.

Seems so. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)


From tim.one@home.com  Thu Mar  1 02:11:25 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 28 Feb 2001 21:11:25 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEOMJCAA.tim.one@home.com>

[Samuele Pedroni]
> x=7
> def f():
>   global x
>   def g():
>     exec "x=3"
>     return x
>   print g()
>
> f()
>
> prints 3, not 7.

Note that the Ref Man (section on the global statement) adds some more wrinkles:

    ...
    global is a directive to the parser.  It applies only to code
    parsed at the same time as the global statement.  In particular,
    a global statement contained in an exec statement does not
    affect the code block containing the exec statement, and code
    contained in an exec statement is unaffected by global statements
    in the code containing the exec statement.  The same applies to the
    eval(), execfile() and compile() functions.

From that we deduce that the x in "x=3" can't refer to the global x no
matter what.  Therefore it must refer to a local x.  Therefore the x in
"return x" must also refer to a local x, lest the single name x refer to
two distinct variables in the body of g.

This was mind-twisting even before nested scopes, though:

>>> x = 666
>>> def f():
...     global x
...     exec("x=3")  # still doesn't "see" the global above it
...     print x
...
>>> f()
666
>>>

So to what did the x in "x=3" refer *there*?  Must have been different than
the x in "print x"!

Mixing global, import*, and exec has always been a rich source of surprises.

Again with a twist:

>>> x = 666
>>> def f():
...     global x
...     exec "global x\nx = 3\n"
...     print x
...
>>> f()
<string>:0: SyntaxWarning: global statement has no meaning at module level
3
>>>

Now it's consistent.  But the warning is pretty mysterious!  The 'global' in
the string passed to exec has a crucial meaning.



From Jason.Tishler@dothill.com  Thu Mar  1 02:44:47 2001
From: Jason.Tishler@dothill.com (Jason Tishler)
Date: Wed, 28 Feb 2001 21:44:47 -0500
Subject: [Python-Dev] Re: Case-sensitive import
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIENHJCAA.tim.one@home.com>; from tim.one@home.com on Wed, Feb 28, 2001 at 05:21:02PM -0500
References: <20010228151728.Q449@dothill.com> <LNBBLJKPBEHFEDALKOLCIENHJCAA.tim.one@home.com>
Message-ID: <20010228214447.I252@dothill.com>

Tim,

On Wed, Feb 28, 2001 at 05:21:02PM -0500, Tim Peters wrote:
> And thank you for your Cygwin work --

You're welcome -- I appreciate the willingness of the core Python team to
consider Cygwin-related patches.

> someday I hope to use Cygwin for more
> than just running "patch" on this box <sigh> ...

Be careful!  First, you may use grep occasionally.  Next, you may find
yourself writing shell scripts.  Before you know it, you have crossed
over to the Unix side.  You have been warned! :,)

Thanks,
Jason

-- 
Jason Tishler
Director, Software Engineering       Phone: +1 (732) 264-8770 x235
Dot Hill Systems Corp.               Fax:   +1 (732) 264-8798
82 Bethany Road, Suite 7             Email: Jason.Tishler@dothill.com
Hazlet, NJ 07730 USA                 WWW:   http://www.dothill.com


From greg@cosc.canterbury.ac.nz  Thu Mar  1 02:58:06 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 01 Mar 2001 15:58:06 +1300 (NZDT)
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEOMJCAA.tim.one@home.com>
Message-ID: <200103010258.PAA02214@s454.cosc.canterbury.ac.nz>

Quoth the Samuele Pedroni:

> In particular,
> a global statement contained in an exec statement does not
> affect the code block containing the exec statement, and code
> contained in an exec statement is unaffected by global statements
> in the code containing the exec statement.

I think this is broken. As long as we're going to allow
exec-with-1-arg to implicitly mess with the current namespace,
names in the exec'ed statement should have the same meanings
as they do in the surrounding statically-compiled code.

So, global statements in the surrounding scope should be honoured
in the exec'ed statement, and global statements should be disallowed
within the exec'ed statement.

Better still, get rid of both exec-with-1-arg and locals()
altogether...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+



From fdrake@users.sourceforge.net  Thu Mar  1 05:20:23 2001
From: fdrake@users.sourceforge.net (Fred L. Drake)
Date: Wed, 28 Feb 2001 21:20:23 -0800
Subject: [Python-Dev] [development doc updates]
Message-ID: <E14YLVn-0003XL-00@usw-pr-shell2.sourceforge.net>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/




From jeremy@alum.mit.edu  Thu Mar  1 05:49:33 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 00:49:33 -0500 (EST)
Subject: [Python-Dev] code objects leakin'
Message-ID: <15005.58093.314004.571576@w221.z064000254.bwi-md.dsl.cnc.net>

It looks like code objects are leaked with surprising frequency.  I
added a simple counter that records all code object allocs and
deallocs.  For many programs, the net is zero.  For some, including
setup.py and the regression test, it's much larger than zero.

I've got no time to look at this before the beta, but perhaps someone
else does.  Even if it can't be fixed, it would be helpful to know
what's going wrong.

I am fairly certain that recursive functions are being leaked, even
after patching function object's traverse function to visit the
func_closure.
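
The leak Jeremy describes lived at the C level, but the shape of the
suspect object is easy to reproduce: a recursive closure is a reference
cycle (function -> closure cell -> function), so it can only be reclaimed
if the traverse function lets the collector see func_closure.  A sketch,
in modern Python, of checking that such a cycle is in fact collected
(names here are illustrative):

```python
import gc
import weakref

def make_recursive():
    # fact refers to itself through a closure cell, forming a cycle:
    # function -> cell -> function.
    def fact(n):
        return 1 if n <= 1 else n * fact(n - 1)
    return fact

f = make_recursive()
ref = weakref.ref(f)   # watch the function object without keeping it alive
assert f(5) == 120

del f          # drop the last external reference; the cycle remains
gc.collect()   # the collector must traverse the closure to free it
print(ref() is None)   # True once the cycle is reclaimed
```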

Jeremy


From jeremy@alum.mit.edu  Thu Mar  1 06:00:25 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 01:00:25 -0500 (EST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects funcobject.c,2.35,2.36
In-Reply-To: <E14YMEZ-0006od-00@usw-pr-cvs1.sourceforge.net>
References: <E14YMEZ-0006od-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <15005.58745.306448.535530@w221.z064000254.bwi-md.dsl.cnc.net>

This change does not appear to solve the leaks, but it seems
necessary for correctness.

Jeremy


From martin@loewis.home.cs.tu-berlin.de  Thu Mar  1 06:16:59 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 1 Mar 2001 07:16:59 +0100
Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax
Message-ID: <200103010616.f216Gx301229@mira.informatik.hu-berlin.de>

> but where's the patch?

Argh. It's now at http://www.informatik.hu-berlin.de/~loewis/python/directive.diff

> other tools that parse Python will have to be adapted.

Yes, that's indeed a problem. Initially, that syntax will be used only
to denote modules that use nested scopes, so those tools would have
time to adjust.

> The __future__ hack doesn't need that.

If it is *just* parsing, then yes. If it does any further analysis
(e.g. "find definition (of a variable)" aka "find assignments to"), or
if they inspect code objects, these tools again need to be adapted.

Regards,
Martin



From thomas@xs4all.net  Thu Mar  1 07:29:09 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 1 Mar 2001 08:29:09 +0100
Subject: [Python-Dev] Re: Case-sensitive import
In-Reply-To: <20010228214447.I252@dothill.com>; from Jason.Tishler@dothill.com on Wed, Feb 28, 2001 at 09:44:47PM -0500
References: <20010228151728.Q449@dothill.com> <LNBBLJKPBEHFEDALKOLCIENHJCAA.tim.one@home.com> <20010228214447.I252@dothill.com>
Message-ID: <20010301082908.I9678@xs4all.nl>

On Wed, Feb 28, 2001 at 09:44:47PM -0500, Jason Tishler wrote:

[ Tim Peters ]
> > someday I hope to use Cygwin for more
> > than just running "patch" on this box <sigh> ...

> Be careful!  First, you may use grep occasionally.  Next, you may find
> yourself writing shell scripts.  Before you know it, you have crossed
> over to the Unix side.  You have been warned! :,)

Well, Tim used to be a true Jedi Knight, but was won over by the dark side.
His name keeps popping up in decidedly unixlike tools, like Emacs' 'python'
mode. It is certain that his defection brought balance to the force (or at
least to Python) but we'd still like to rescue him before he is forced to
sacrifice himself to save Python. ;)

Lets-just-call-him-anatim-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fredrik@pythonware.com  Thu Mar  1 11:57:08 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Thu, 1 Mar 2001 12:57:08 +0100
Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax
References: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de>  <200102282248.RAA31007@cj20424-a.reston1.va.home.com>
Message-ID: <02c901c0a246$bef128e0$0900a8c0@SPIFF>

Guido wrote:
> There's one downside to the "directive" syntax: other tools that parse
> Python will have to be adapted.  The __future__ hack doesn't need
> that.

also:

- "from __future__" gives a clear indication that you're using
  a non-standard feature.  "directive" is too generic.

- everyone knows how to mentally parse from-import statements,
  and that they may have side effects.  nobody knows
  what "directive" does.

- pragmas suck.  we need much more discussion (and calendar
  time) before adding a pragma-like directive to Python.

- "from __future__" makes me smile.  "directive" doesn't.

-1, for now.

Cheers /F



From guido@digicool.com  Thu Mar  1 14:29:10 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 09:29:10 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: Your message of "Thu, 01 Mar 2001 15:58:06 +1300."
 <200103010258.PAA02214@s454.cosc.canterbury.ac.nz>
References: <200103010258.PAA02214@s454.cosc.canterbury.ac.nz>
Message-ID: <200103011429.JAA03471@cj20424-a.reston1.va.home.com>

> Quoth the Samuele Pedroni:
> 
> > In particular,
> > a global statement contained in an exec statement does not
> > affect the code block containing the exec statement, and code
> > contained in an exec statement is unaffected by global statements
> > in the code containing the exec statement.
> 
> I think this is broken. As long as we're going to allow
> exec-with-1-arg to implicitly mess with the current namespace,
> names in the exec'ed statement should have the same meanings
> as they do in the surrounding statically-compiled code.
> 
> So, global statements in the surrounding scope should be honoured
> in the exec'ed statement, and global statements should be disallowed
> within the exec'ed statement.
> 
> Better still, get rid of both exec-with-1-arg and locals()
> altogether...

That's my plan, so I suppose we should not bother to "fix" the broken
behavior that has been around from the start.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Thu Mar  1 14:55:01 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 09:55:01 -0500
Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax
In-Reply-To: Your message of "Thu, 01 Mar 2001 07:16:59 +0100."
 <200103010616.f216Gx301229@mira.informatik.hu-berlin.de>
References: <200103010616.f216Gx301229@mira.informatik.hu-berlin.de>
Message-ID: <200103011455.JAA04064@cj20424-a.reston1.va.home.com>

> Argh. It's now at http://www.informatik.hu-berlin.de/~loewis/python/directive.diff
> 
> > other tools that parse Python will have to be adapted.
> 
> Yes, that's indeed a problem. Initially, that syntax will be used only
> to denote modules that use nested scopes, so those tools would have
> time to adjust.
> 
> > The __future__ hack doesn't need that.
> 
> If it is *just* parsing, then yes. If it does any further analysis
> (e.g. "find definition (of a variable)" aka "find assignments to"), or
> if they inspect code objects, these tools again need to be adapted.

This is just too late for the first beta.  But we'll consider it for
beta 2!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From Samuele Pedroni <pedroni@inf.ethz.ch>  Thu Mar  1 15:33:14 2001
From: Samuele Pedroni <pedroni@inf.ethz.ch> (Samuele Pedroni)
Date: Thu, 1 Mar 2001 16:33:14 +0100 (MET)
Subject: [Python-Dev] just trying to catch up with the semantic
Message-ID: <200103011533.QAA06035@core.inf.ethz.ch>

Hi.

I read the following CVS log from Jeremy:

> Fix core dump in example from Samuele Pedroni:
> 
> from __future__ import nested_scopes
> x=7
> def f():
>     x=1
>     def g():
>         global x
>         def i():
>             def h():
>                 return x
>             return h()
>         return i()
>     return g()
> 
> print f()
> print x
> 
> This kind of code didn't work correctly because x was treated as free
> in i, leading to an attempt to load x in g to make a closure for i.
> 
> Solution is to make global decl apply to nested scopes unless there is
> an assignment.  Thus, x in h is global.
> 

Will that be the intended final semantics?

The more backward-compatible semantics would be for that code to print:
1
7
(I think this was the semantics Guido was thinking of.)

Now, if I have understood correctly, this prints
7
7

but if I put an x=666 in h, this prints:
666
7

but the most natural (just IMHO) nesting semantics would in that case be
to print:
666
666
(so x is considered global despite the assignment, because the
declaration extends to enclosed scopes too).

I have no preference, but I'm confused.  Samuele Pedroni.



From guido@digicool.com  Thu Mar  1 15:42:55 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 10:42:55 -0500
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: Your message of "Thu, 01 Mar 2001 05:56:42 PST."
 <E14YTZS-0003kB-00@usw-pr-cvs1.sourceforge.net>
References: <E14YTZS-0003kB-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <200103011542.KAA04518@cj20424-a.reston1.va.home.com>

Ping just checked in this:

> Log Message:
> Add __author__ and __credits__ variables.
> 
> 
> Index: tokenize.py
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Lib/tokenize.py,v
> retrieving revision 1.19
> retrieving revision 1.20
> diff -C2 -r1.19 -r1.20
> *** tokenize.py	2001/03/01 04:27:19	1.19
> --- tokenize.py	2001/03/01 13:56:40	1.20
> ***************
> *** 10,14 ****
>   it produces COMMENT tokens for comments and gives type OP for all operators."""
>   
> ! __version__ = "Ka-Ping Yee, 26 October 1997; patched, GvR 3/30/98"
>   
>   import string, re
> --- 10,15 ----
>   it produces COMMENT tokens for comments and gives type OP for all operators."""
>   
> ! __author__ = 'Ka-Ping Yee <ping@lfw.org>'
> ! __credits__ = 'first version, 26 October 1997; patched, GvR 3/30/98'
>   
>   import string, re

I'm slightly uncomfortable with the __credits__ variable inserted
here.  First of all, __credits__ doesn't really describe the
information given.  Second, doesn't this info belong in the CVS
history?  I'm not for including random extracts of a module's history
in the source code -- this is more likely than not to become out of
date.  (E.g. from the CVS log it's not clear why my contribution
deserves a mention while Tim's doesn't -- it looks like Tim probably
spent a lot more time thinking about it than I did.)

Another source of discomfort is that there's absolutely no standard
for this kind of meta-data variables.  We've got __version__, and I
believe we once agreed on that (in 1994 or so :-).  But __author__?
__credits__?  What next -- __cute_signoff__?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From jeremy@alum.mit.edu  Thu Mar  1 16:10:28 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 11:10:28 -0500 (EST)
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <200103011533.QAA06035@core.inf.ethz.ch>
References: <200103011533.QAA06035@core.inf.ethz.ch>
Message-ID: <15006.29812.95600.22223@w221.z064000254.bwi-md.dsl.cnc.net>

I'm not convinced there is a natural meaning for this, nor am I
certain that what is now implemented is the least unnatural meaning.

    from __future__ import nested_scopes
    x=7
    def f():
        x=1
        def g():
            global x
            def i():
                def h():
                    return x
                return h()
            return i()
        return g()
    
    print f()
    print x

prints:
    7
    7

I think the chief question is what 'global x' means without any other
reference to x in the same code block.  The other issue is whether a
global statement is a name binding operation of sorts.

If we had
        def g():
            x = 2            # instead of global
            def i():
                def h():
                    return x
                return h()
            return i()

It is clear that x in h uses the binding introduced in g.

        def g():
            global x
            x = 2
            def i():
                def h():
                    return x
                return h()
            return i()

Now that x is declared global, should the binding for x in g be
visible in h?  I think it should, because the alternative would be
more confusing.

    def f():
        x = 3
        def g():
            global x
            x = 2
            def i():
                def h():
                    return x
                return h()
            return i()

If global x meant that the binding for x wasn't visible in nested
scopes, then h would use the binding for x introduced in f.  This is
confusing, because visual inspection shows that the nearest block with
an assignment to x is g.  (You might overlook the global x statement.)

The rule currently implemented is to use the binding introduced in the
nearest enclosing scope.  If the binding happens to be between the
name and the global namespace, that is the binding that is used.

Samuele noted that he thinks the most natural semantics would be for
global to extend into nested scopes.  I think this would be confusing
-- or at least I'm confused <wink>.  

        def g():
            global x
            x = 2
            def i():
                def h():
                    x = 10
                    return x
                return h()
            return i()

In this case, he suggests that the assignment in h should affect the
global x.  I think this is incorrect because enclosing scopes should
only have an effect when variables are free.  By the normal Python
rules, x is not free in h because there is an assignment to x; x is
just a local.
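
Jeremy's "nearest enclosing binding" rule is the semantics that stuck: a
global declaration shadows any enclosing local for nested reads, unless
the inner block assigns the name itself.  Adapted to Python 3 syntax, the
thread's layered example behaves like this (a sketch run against modern
CPython):

```python
x = 7

def f():
    x = 1                 # f's local; skipped by the lookup below
    def g():
        global x          # x in g means the module-level x...
        def i():
            def h():
                return x  # ...and the declaration carries into nested reads
            return h()
        return i()
    return g()

print(f())  # 7 -- h reads the module-level x, not f's local x = 1
print(x)    # 7 -- nothing rebound the global
```

If h itself assigned x before returning it, x would become an ordinary
local of h by the usual rule, matching the 666 / 7 outcome Samuele
observed.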

Jeremy


From ping@lfw.org  Thu Mar  1 16:13:56 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Thu, 1 Mar 2001 08:13:56 -0800 (PST)
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: <200103011542.KAA04518@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10103010752120.862-100000@skuld.kingmanhall.org>

On Thu, 1 Mar 2001, Guido van Rossum wrote:
> I'm slightly uncomfortable with the __credits__ variable inserted
> here.  First of all, __credits__ doesn't really describe the
> information given.

I'll explain the motivation here.  I was going to write something
about this when i got up in the morning, but you've noticed before
i got around to it (and i haven't gone to sleep yet).

    - The __version__ variable really wasn't a useful place for
      this information.  The version of something really isn't
      the same as the author or the date it was created; it should
      be either a revision number from an RCS tag or a number
      updated periodically by the maintainer.  By separating out
      other kinds of information, we allow __version__ to retain
      its focused purpose.

    - The __author__ tag is a pretty standard piece of metadata
      among most kinds of documentation -- there are AUTHOR
      sections in almost all man pages, and similar "creator"
      information in just about every metadata standard for
      documents or work products of any kind.  Contact info and
      copyright info can go here.  This is important because it
      identifies a responsible party -- someone to ask questions
      of, and to send complaints, thanks, and patches to.  Maybe
      one day we can use it to help automate the process of
      assigning patches and directing feedback.

    - The __credits__ tag is a way of acknowledging others who
      contributed to the product.  It can be used to recount a
      little history, but the real motivation for including it
      is social engineering: i wanted to foster a stronger mutual
      gratification culture around Python by giving people a place
      to be generous with their acknowledgements.  It's always
      good to err on the side of generosity rather than stinginess
      when giving praise.  Open source is fueled in large part by
      egoboo, and if we can let everyone participate, peer-to-peer
      style rather than centralized, in patting others on the back,
      then all the better.  People do this in # comments anyway;
      the only difference now is that their notes are visible to pydoc.

> Second, doesn't this info belong in the CVS history?

__credits__ isn't supposed to be a change log; it's a reward
mechanism.  Or consider it ego-Napster, if you prefer.

Share the love. :)

> Another source of discomfort is that there's absolutely no standard
> for this kind of meta-data variables.

I think the behaviour of processing tools such as pydoc will
create a de-facto standard.  I was careful to respect __version__
in the ways that it is currently used, and i am humbly offering
these others in the hope that you will see why they are worth
having, too.



-- ?!ng

"If cryptography is outlawed, only QJVKN YFDLA ZBYCG HFUEG UFRYG..."



From guido@digicool.com  Thu Mar  1 16:30:53 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 11:30:53 -0500
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: Your message of "Thu, 01 Mar 2001 08:13:56 PST."
 <Pine.LNX.4.10.10103010752120.862-100000@skuld.kingmanhall.org>
References: <Pine.LNX.4.10.10103010752120.862-100000@skuld.kingmanhall.org>
Message-ID: <200103011630.LAA04973@cj20424-a.reston1.va.home.com>

> On Thu, 1 Mar 2001, Guido van Rossum wrote:
> > I'm slightly uncomfortable with the __credits__ variable inserted
> > here.  First of all, __credits__ doesn't really describe the
> > information given.

Ping replied:
> I'll explain the motivation here.  I was going to write something
> about this when i got up in the morning, but you've noticed before
> i got around to it (and i haven't gone to sleep yet).
> 
>     - The __version__ variable really wasn't a useful place for
>       this information.  The version of something really isn't
>       the same as the author or the date it was created; it should
>       be either a revision number from an RCS tag or a number
>       updated periodically by the maintainer.  By separating out
>       other kinds of information, we allow __version__ to retain
>       its focused purpose.

Sure.

>     - The __author__ tag is a pretty standard piece of metadata
>       among most kinds of documentation -- there are AUTHOR
>       sections in almost all man pages, and similar "creator"
>       information in just about every metadata standard for
>       documents or work products of any kind.  Contact info and
>       copyright info can go here.  This is important because it
>       identifies a responsible party -- someone to ask questions
>       of, and to send complaints, thanks, and patches to.  Maybe
>       one day we can use it to help automate the process of
>       assigning patches and directing feedback.

No problem here.

>     - The __credits__ tag is a way of acknowledging others who
>       contributed to the product.  It can be used to recount a
>       little history, but the real motivation for including it
>       is social engineering: i wanted to foster a stronger mutual
>       gratification culture around Python by giving people a place
>       to be generous with their acknowledgements.  It's always
>       good to err on the side of generosity rather than stinginess
>       when giving praise.  Open source is fueled in large part by
>       egoboo, and if we can let everyone participate, peer-to-peer
>       style rather than centralized, in patting others on the back,
>       then all the better.  People do this in # comments anyway;
>       the only difference now is that their notes are visible to pydoc.

OK.  Then I think you goofed up in the __credits__ you actually
checked in for tokenize.py:

    __credits__ = 'first version, 26 October 1997; patched, GvR 3/30/98'

I would have expected something like this:

    __credits__ = 'contributions: GvR, ESR, Tim Peters, Thomas Wouters, ' \
                  'Fred Drake, Skip Montanaro'
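The convention under discussion is nothing more than module-level string
assignments that a documentation tool can read back.  A minimal sketch in
that style (the names follow the thread; the values are illustrative
placeholders, not from any real module):

```python
# Module-level metadata in the style discussed above.
# The values here are made up for illustration.
__version__ = "1.5"                                  # focused: just a revision
__author__ = "A. Hacker <a.hacker@example.org>"      # the responsible party
__credits__ = "contributions: GvR, ESR, Tim Peters"  # acknowledgements

def describe():
    """Return the metadata a tool like pydoc could display."""
    return {"version": __version__,
            "author": __author__,
            "credits": __credits__}
```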

> > Second, doesn't this info belong in the CVS history?
> 
> __credits__ isn't supposed to be a change log; it's a reward
> mechanism.  Or consider it ego-Napster, if you prefer.
> 
> Share the love. :)

You west coasters. :-)

> > Another source of discomfort is that there's absolutely no standard
> > for this kind of meta-data variable.
> 
> I think the behaviour of processing tools such as pydoc will
> create a de-facto standard.  I was careful to respect __version__
> in the ways that it is currently used, and i am humbly offering
> these others in the hope that you will see why they are worth
> having, too.

What does pydoc do with __credits__?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From jeremy@alum.mit.edu  Thu Mar  1 16:37:53 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 11:37:53 -0500 (EST)
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: <4.3.1.2.20010228223714.00d29430@exchange.starmedia.net>
References: <4.3.1.2.20010228223714.00d29430@exchange.starmedia.net>
Message-ID: <15006.31457.377477.65547@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "RT" == Robin Thomas <robin.thomas@starmedia.net> writes:

  RT> Using Python 2.0 on Win32. Am I the only person to be depressed
  RT> by the following behavior now that __getitem__ does the work of
  RT> __getslice__?

You may be the only person to have tried it :-).

  RT> Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32
  >>> d = {}
  >>> d[0:1] = 1
  >>> d
  {slice(0, 1, None): 1}

I think this should raise a TypeError (as you suggested later).

>>> del d[0:1]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: object doesn't support slice deletion
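One way to get the TypeError behavior argued for here, without waiting for
the interpreter to change, is a thin wrapper type.  This is only a sketch of
the desired semantics, not anything in the standard library:

```python
class StrictDict(dict):
    """Reject the accidental d[0:1] = v spelling discussed above."""

    def __setitem__(self, key, value):
        # d[0:1] = v reaches here with key == slice(0, 1, None)
        if isinstance(key, slice):
            raise TypeError("dict does not support slice assignment")
        dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        if isinstance(key, slice):
            raise TypeError("dict does not support slice deletion")
        dict.__delitem__(self, key)
```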

Jeremy


From Samuele Pedroni <pedroni@inf.ethz.ch>  Thu Mar  1 16:53:43 2001
From: Samuele Pedroni <pedroni@inf.ethz.ch> (Samuele Pedroni)
Date: Thu, 1 Mar 2001 17:53:43 +0100 (MET)
Subject: [Python-Dev] just trying to catch up with the semantic
Message-ID: <200103011653.RAA09025@core.inf.ethz.ch>

Hi.

Your rationale sounds ok.
We are just facing the oddities of the Python rule - that assignment
identifies locals - when extended to the new world of nested scopes.
(Everybody will be confused in his own way ;), better write non-confusing
code ;))
I think I should really learn to read code this way, and so should
everybody coming from languages with explicit declarations:

is the semantic (expressed through bytecode instrs) right?

(I)
    from __future__ import nested_scopes
    x=7
    def f():
        #pseudo-local-decl x
        x=1
        def g():
            global x # global-decl x
            def i():
                def h():
                    return x # => LOAD_GLOBAL
                return h()
            return i()
        return g()
    
    print f()
    print x

(II)
        def g():
            #pseudo-local-decl x
            x = 2            # instead of global
            def i():
                def h():
                    return x # => LOAD_DEREF (x from g)
                return h()
            return i()

(III)
        def g():
            global x # global-decl x
            x = 2 # => STORE_GLOBAL
            def i():
                def h():
                    return x # => LOAD_GLOBAL
                return h()
            return i()
(IV)
    def f():
        # pseudo-local-decl x
        x = 3 # => STORE_FAST
        def g():
            global x # global-decl x
            x = 2 # => STORE_GLOBAL
            def i():
                def h():
                    return x # => LOAD_GLOBAL
                return h()
            return i()
(V)
        def g():
            global x # global-decl x
            x = 2 # => STORE_GLOBAL
            def i():
                def h():
                    # pseudo-local-decl x
                    x = 10   # => STORE_FAST
                    return x # => LOAD_FAST
                return h()
            return i()
If one also reads the implicit local-decl here, this is fine; otherwise it
is confusing. It's a matter of whether 'global' kills the local-decl only in
one scope or in the nested scopes too. I have no preference.


regards, Samuele Pedroni.



From jeremy@alum.mit.edu  Thu Mar  1 16:57:20 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 11:57:20 -0500 (EST)
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <200103011653.RAA09025@core.inf.ethz.ch>
References: <200103011653.RAA09025@core.inf.ethz.ch>
Message-ID: <15006.32624.826559.907667@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "SP" == Samuele Pedroni <pedroni@inf.ethz.ch> writes:

  SP> If one reads also here the implicit local-decl, this is fine,
  SP> otherwise this is confusing. It's a matter whether 'global'
  SP> kills the local-decl only in one scope or in the nesting too. I
  SP> have no preference.

All your examples look like what is currently implemented.  My
preference is that global kills the local-decl only in one scope.
I'll stick with that unless Guido disagrees.

Jeremy


From Samuele Pedroni <pedroni@inf.ethz.ch>  Thu Mar  1 17:04:56 2001
From: Samuele Pedroni <pedroni@inf.ethz.ch> (Samuele Pedroni)
Date: Thu, 1 Mar 2001 18:04:56 +0100 (MET)
Subject: [Python-Dev] just trying to catch up with the semantic
Message-ID: <200103011704.SAA09425@core.inf.ethz.ch>

[Jeremy] 
> All your examples look like what is currently implemented.  My
> preference is that global kills the local-decl only in one scope.
> I'll stick with that unless Guido disagrees.
At least this will break less code.

regards.



From ping@lfw.org  Thu Mar  1 17:11:28 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Thu, 1 Mar 2001 09:11:28 -0800 (PST)
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: <200103011630.LAA04973@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10103010909520.862-100000@skuld.kingmanhall.org>

On Thu, 1 Mar 2001, Guido van Rossum wrote:
> OK.  Then I think you goofed up in the __credits__ you actually
> checked in for tokenize.py:
> 
>     __credits__ = 'first version, 26 October 1997; patched, GvR 3/30/98'

Indeed, that was mindless copying.

> I would have expected something like this:
> 
>     __credits__ = 'contributions: GvR, ESR, Tim Peters, Thomas Wouters, ' \
>                   'Fred Drake, Skip Montanaro'

Sure.  Done.

> You west coasters. :-)

You forget that i'm a Canadian prairie boy at heart. :)

> What does pydoc do with __credits__?

They show up in a little section at the end of the document.


-- ?!ng

"If cryptography is outlawed, only QJVKN YFDLA ZBYCG HFUEG UFRYG..."



From esr@thyrsus.com  Thu Mar  1 17:47:51 2001
From: esr@thyrsus.com (Eric S. Raymond)
Date: Thu, 1 Mar 2001 12:47:51 -0500
Subject: [Python-Dev] Finger error -- my apologies
Message-ID: <20010301124751.B24835@thyrsus.com>

--9jxsPFA5p3P2qPhR
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

I meant to accept this patch, but I think I rejected it instead.
Sorry, Ping.  Resubmit, please, if I fooed up?
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

It is the assumption of this book that a work of art is a gift, not a
commodity.  Or, to state the modern case with more precision, that works of
art exist simultaneously in two "economies," a market economy and a gift
economy.  Only one of these is essential, however: a work of art can survive
without the market, but where there is no gift there is no art.
	-- Lewis Hyde, The Gift: Imagination and the Erotic Life of Property

--9jxsPFA5p3P2qPhR
Content-Type: message/rfc822
Content-Disposition: inline

Return-Path: <nobody@sourceforge.net>
Received: from localhost (IDENT:esr@localhost [127.0.0.1])
	by snark.thyrsus.com (8.11.0/8.11.0) with ESMTP id f21EA5J23636
	for <esr@localhost>; Thu, 1 Mar 2001 09:10:05 -0500
Envelope-to: esr@thyrsus.com
Delivery-date: Wed, 28 Feb 2001 23:03:51 -0800
Received: from hurkle.thyrsus.com [198.186.203.83]
	by localhost with IMAP (fetchmail-5.6.9)
	for esr@localhost (multi-drop); Thu, 01 Mar 2001 09:10:05 -0500 (EST)
Received: from usw-sf-sshgate.sourceforge.net ([216.136.171.253] helo=usw-sf-netmisc.sourceforge.net)
	by hurkle.thyrsus.com with esmtp (Exim 3.22 #1 (Debian))
	id 14YN7v-0002yQ-00
	for <esr@thyrsus.com>; Wed, 28 Feb 2001 23:03:51 -0800
Received: from usw-sf-web2-b.sourceforge.net
	([10.3.1.6] helo=usw-sf-web2.sourceforge.net ident=mail)
	by usw-sf-netmisc.sourceforge.net with esmtp (Exim 3.22 #1 (Debian))
	id 14YTfu-0005SA-00; Thu, 01 Mar 2001 06:03:22 -0800
Received: from nobody by usw-sf-web2.sourceforge.net with local (Exim 3.22 #1 (Debian))
	id 14YTgQ-0008FG-00; Thu, 01 Mar 2001 06:03:54 -0800
To: noreply@sourceforge.net
Subject: [ python-Patches-405122 ] webbrowser fix
Message-Id: <E14YTgQ-0008FG-00@usw-sf-web2.sourceforge.net>
From: nobody <nobody@sourceforge.net>
Date: Thu, 01 Mar 2001 06:03:54 -0800
X-SpamBouncer: 1.3 (1/18/01)
X-SBNote: From Admin
X-SBClass: Admin

Patches #405122, was updated on 2001-03-01 06:03
You can respond by visiting: 
http://sourceforge.net/tracker/?func=detail&atid=305470&aid=405122&group_id=5470

Category: library
Group: None
Status: Open
Priority: 5
Submitted By: Ka-Ping Yee
Assigned to: Eric S. Raymond
Summary: webbrowser fix

Initial Comment:
Put the word "Web" in the synopsis line.
Remove the -no-about-splash option, as it prevents this
module from working with Netscape 3.

----------------------------------------------------------------------

You can respond by visiting: 
http://sourceforge.net/tracker/?func=detail&atid=305470&aid=405122&group_id=5470

--9jxsPFA5p3P2qPhR--


From jeremy@alum.mit.edu  Thu Mar  1 18:16:03 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 13:16:03 -0500 (EST)
Subject: [Python-Dev] __future__ and the interactive interpreter prompt
Message-ID: <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net>

from __future__ import nested_scopes is accepted at the interactive
interpreter prompt but has no effect beyond the line on which it was
entered.  You could use it with lambdas entered following a
semicolon, I guess.

I would rather see the future statement take effect for the remainder
of the interactive interpreter session.  I have included a first-cut
patch below that makes this possible, using an object called
PySessionState.  (I don't like the name, but don't have a better one;
PyCompilerFlags?)

The idea of the session state is to record information about the state
of an interactive session that may affect compilation.  The
state object is created in PyRun_InteractiveLoop() and passed all the
way through to PyNode_Compile().
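For what it's worth, today's Python exposes the same idea at the Python
level: the codeop module's Compile class remembers __future__ statements
across compilations, which is exactly the per-session behavior described
above.  A small sketch (using the annotations future, since it has a
visible compile-time effect):

```python
import codeop

compiler = codeop.Compile()   # remembers __future__ flags between calls

# First interactive "line": a future statement.
compiler("from __future__ import annotations", "<stdin>", "exec")

# A later line is compiled with that future still in force, so the
# undefined name used as an annotation is kept as a string, never evaluated.
ns = {}
exec(compiler("def f(a: some_undefined_name): pass", "<stdin>", "exec"), ns)
```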

Does this seem a reasonable approach?  Should I include it in the
beta?  Any name suggestions?

Jeremy


Index: Include/compile.h
===================================================================
RCS file: /cvsroot/python/python/dist/src/Include/compile.h,v
retrieving revision 2.27
diff -c -r2.27 compile.h
*** Include/compile.h	2001/02/28 01:58:08	2.27
--- Include/compile.h	2001/03/01 18:18:27
***************
*** 41,47 ****
  
  /* Public interface */
  struct _node; /* Declare the existence of this type */
! DL_IMPORT(PyCodeObject *) PyNode_Compile(struct _node *, char *);
  DL_IMPORT(PyCodeObject *) PyCode_New(
  	int, int, int, int, PyObject *, PyObject *, PyObject *, PyObject *,
  	PyObject *, PyObject *, PyObject *, PyObject *, int, PyObject *); 
--- 41,48 ----
  
  /* Public interface */
  struct _node; /* Declare the existence of this type */
! DL_IMPORT(PyCodeObject *) PyNode_Compile(struct _node *, char *,
! 					 PySessionState *);
  DL_IMPORT(PyCodeObject *) PyCode_New(
  	int, int, int, int, PyObject *, PyObject *, PyObject *, PyObject *,
  	PyObject *, PyObject *, PyObject *, PyObject *, int, PyObject *); 
Index: Include/pythonrun.h
===================================================================
RCS file: /cvsroot/python/python/dist/src/Include/pythonrun.h,v
retrieving revision 2.38
diff -c -r2.38 pythonrun.h
*** Include/pythonrun.h	2001/02/02 18:19:15	2.38
--- Include/pythonrun.h	2001/03/01 18:18:27
***************
*** 7,12 ****
--- 7,16 ----
  extern "C" {
  #endif
  
+ typedef struct {
+ 	int ss_nested_scopes;
+ } PySessionState;
+ 
  DL_IMPORT(void) Py_SetProgramName(char *);
  DL_IMPORT(char *) Py_GetProgramName(void);
  
***************
*** 25,31 ****
  DL_IMPORT(int) PyRun_SimpleString(char *);
  DL_IMPORT(int) PyRun_SimpleFile(FILE *, char *);
  DL_IMPORT(int) PyRun_SimpleFileEx(FILE *, char *, int);
! DL_IMPORT(int) PyRun_InteractiveOne(FILE *, char *);
  DL_IMPORT(int) PyRun_InteractiveLoop(FILE *, char *);
  
  DL_IMPORT(struct _node *) PyParser_SimpleParseString(char *, int);
--- 29,35 ----
  DL_IMPORT(int) PyRun_SimpleString(char *);
  DL_IMPORT(int) PyRun_SimpleFile(FILE *, char *);
  DL_IMPORT(int) PyRun_SimpleFileEx(FILE *, char *, int);
! DL_IMPORT(int) PyRun_InteractiveOne(FILE *, char *, PySessionState *);
  DL_IMPORT(int) PyRun_InteractiveLoop(FILE *, char *);
  
  DL_IMPORT(struct _node *) PyParser_SimpleParseString(char *, int);
Index: Python/compile.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/compile.c,v
retrieving revision 2.184
diff -c -r2.184 compile.c
*** Python/compile.c	2001/03/01 06:09:34	2.184
--- Python/compile.c	2001/03/01 18:18:28
***************
*** 471,477 ****
  static void com_assign(struct compiling *, node *, int, node *);
  static void com_assign_name(struct compiling *, node *, int);
  static PyCodeObject *icompile(node *, struct compiling *);
! static PyCodeObject *jcompile(node *, char *, struct compiling *);
  static PyObject *parsestrplus(node *);
  static PyObject *parsestr(char *);
  static node *get_rawdocstring(node *);
--- 471,478 ----
  static void com_assign(struct compiling *, node *, int, node *);
  static void com_assign_name(struct compiling *, node *, int);
  static PyCodeObject *icompile(node *, struct compiling *);
! static PyCodeObject *jcompile(node *, char *, struct compiling *,
! 			      PySessionState *);
  static PyObject *parsestrplus(node *);
  static PyObject *parsestr(char *);
  static node *get_rawdocstring(node *);
***************
*** 3814,3822 ****
  }
  
  PyCodeObject *
! PyNode_Compile(node *n, char *filename)
  {
! 	return jcompile(n, filename, NULL);
  }
  
  struct symtable *
--- 3815,3823 ----
  }
  
  PyCodeObject *
! PyNode_Compile(node *n, char *filename, PySessionState *sess)
  {
! 	return jcompile(n, filename, NULL, sess);
  }
  
  struct symtable *
***************
*** 3844,3854 ****
  static PyCodeObject *
  icompile(node *n, struct compiling *base)
  {
! 	return jcompile(n, base->c_filename, base);
  }
  
  static PyCodeObject *
! jcompile(node *n, char *filename, struct compiling *base)
  {
  	struct compiling sc;
  	PyCodeObject *co;
--- 3845,3856 ----
  static PyCodeObject *
  icompile(node *n, struct compiling *base)
  {
! 	return jcompile(n, base->c_filename, base, NULL);
  }
  
  static PyCodeObject *
! jcompile(node *n, char *filename, struct compiling *base,
! 	 PySessionState *sess)
  {
  	struct compiling sc;
  	PyCodeObject *co;
***************
*** 3864,3870 ****
  	} else {
  		sc.c_private = NULL;
  		sc.c_future = PyNode_Future(n, filename);
! 		if (sc.c_future == NULL || symtable_build(&sc, n) < 0) {
  			com_free(&sc);
  			return NULL;
  		}
--- 3866,3882 ----
  	} else {
  		sc.c_private = NULL;
  		sc.c_future = PyNode_Future(n, filename);
! 		if (sc.c_future == NULL) {
! 			com_free(&sc);
! 			return NULL;
! 		}
! 		if (sess) {
! 			if (sess->ss_nested_scopes)
! 				sc.c_future->ff_nested_scopes = 1;
! 			else if (sc.c_future->ff_nested_scopes)
! 				sess->ss_nested_scopes = 1;
! 		}
! 		if (symtable_build(&sc, n) < 0) {
  			com_free(&sc);
  			return NULL;
  		}
Index: Python/import.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/import.c,v
retrieving revision 2.169
diff -c -r2.169 import.c
*** Python/import.c	2001/03/01 08:47:29	2.169
--- Python/import.c	2001/03/01 18:18:28
***************
*** 608,614 ****
  	n = PyParser_SimpleParseFile(fp, pathname, Py_file_input);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, pathname);
  	PyNode_Free(n);
  
  	return co;
--- 608,614 ----
  	n = PyParser_SimpleParseFile(fp, pathname, Py_file_input);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, pathname, NULL);
  	PyNode_Free(n);
  
  	return co;
Index: Python/pythonrun.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/pythonrun.c,v
retrieving revision 2.125
diff -c -r2.125 pythonrun.c
*** Python/pythonrun.c	2001/02/28 20:58:04	2.125
--- Python/pythonrun.c	2001/03/01 18:18:28
***************
*** 37,45 ****
  static void initmain(void);
  static void initsite(void);
  static PyObject *run_err_node(node *n, char *filename,
! 			      PyObject *globals, PyObject *locals);
  static PyObject *run_node(node *n, char *filename,
! 			  PyObject *globals, PyObject *locals);
  static PyObject *run_pyc_file(FILE *fp, char *filename,
  			      PyObject *globals, PyObject *locals);
  static void err_input(perrdetail *);
--- 37,47 ----
  static void initmain(void);
  static void initsite(void);
  static PyObject *run_err_node(node *n, char *filename,
! 			      PyObject *globals, PyObject *locals,
! 			      PySessionState *sess);
  static PyObject *run_node(node *n, char *filename,
! 			  PyObject *globals, PyObject *locals,
! 			  PySessionState *sess);
  static PyObject *run_pyc_file(FILE *fp, char *filename,
  			      PyObject *globals, PyObject *locals);
  static void err_input(perrdetail *);
***************
*** 56,62 ****
  extern void _PyCodecRegistry_Init(void);
  extern void _PyCodecRegistry_Fini(void);
  
- 
  int Py_DebugFlag; /* Needed by parser.c */
  int Py_VerboseFlag; /* Needed by import.c */
  int Py_InteractiveFlag; /* Needed by Py_FdIsInteractive() below */
--- 58,63 ----
***************
*** 472,477 ****
--- 473,481 ----
  {
  	PyObject *v;
  	int ret;
+ 	PySessionState sess;
+ 
+ 	sess.ss_nested_scopes = 0;
  	v = PySys_GetObject("ps1");
  	if (v == NULL) {
  		PySys_SetObject("ps1", v = PyString_FromString(">>> "));
***************
*** 483,489 ****
  		Py_XDECREF(v);
  	}
  	for (;;) {
! 		ret = PyRun_InteractiveOne(fp, filename);
  #ifdef Py_REF_DEBUG
  		fprintf(stderr, "[%ld refs]\n", _Py_RefTotal);
  #endif
--- 487,493 ----
  		Py_XDECREF(v);
  	}
  	for (;;) {
! 		ret = PyRun_InteractiveOne(fp, filename, &sess);
  #ifdef Py_REF_DEBUG
  		fprintf(stderr, "[%ld refs]\n", _Py_RefTotal);
  #endif
***************
*** 497,503 ****
  }
  
  int
! PyRun_InteractiveOne(FILE *fp, char *filename)
  {
  	PyObject *m, *d, *v, *w;
  	node *n;
--- 501,507 ----
  }
  
  int
! PyRun_InteractiveOne(FILE *fp, char *filename, PySessionState *sess)
  {
  	PyObject *m, *d, *v, *w;
  	node *n;
***************
*** 537,543 ****
  	if (m == NULL)
  		return -1;
  	d = PyModule_GetDict(m);
! 	v = run_node(n, filename, d, d);
  	if (v == NULL) {
  		PyErr_Print();
  		return -1;
--- 541,547 ----
  	if (m == NULL)
  		return -1;
  	d = PyModule_GetDict(m);
! 	v = run_node(n, filename, d, d, sess);
  	if (v == NULL) {
  		PyErr_Print();
  		return -1;
***************
*** 907,913 ****
  PyRun_String(char *str, int start, PyObject *globals, PyObject *locals)
  {
  	return run_err_node(PyParser_SimpleParseString(str, start),
! 			    "<string>", globals, locals);
  }
  
  PyObject *
--- 911,917 ----
  PyRun_String(char *str, int start, PyObject *globals, PyObject *locals)
  {
  	return run_err_node(PyParser_SimpleParseString(str, start),
! 			    "<string>", globals, locals, NULL);
  }
  
  PyObject *
***************
*** 924,946 ****
  	node *n = PyParser_SimpleParseFile(fp, filename, start);
  	if (closeit)
  		fclose(fp);
! 	return run_err_node(n, filename, globals, locals);
  }
  
  static PyObject *
! run_err_node(node *n, char *filename, PyObject *globals, PyObject *locals)
  {
  	if (n == NULL)
  		return  NULL;
! 	return run_node(n, filename, globals, locals);
  }
  
  static PyObject *
! run_node(node *n, char *filename, PyObject *globals, PyObject *locals)
  {
  	PyCodeObject *co;
  	PyObject *v;
! 	co = PyNode_Compile(n, filename);
  	PyNode_Free(n);
  	if (co == NULL)
  		return NULL;
--- 928,957 ----
  	node *n = PyParser_SimpleParseFile(fp, filename, start);
  	if (closeit)
  		fclose(fp);
! 	return run_err_node(n, filename, globals, locals, NULL);
  }
  
  static PyObject *
! run_err_node(node *n, char *filename, PyObject *globals, PyObject *locals,
! 	     PySessionState *sess)
  {
  	if (n == NULL)
  		return  NULL;
! 	return run_node(n, filename, globals, locals, sess);
  }
  
  static PyObject *
! run_node(node *n, char *filename, PyObject *globals, PyObject *locals,
! 	 PySessionState *sess)
  {
  	PyCodeObject *co;
  	PyObject *v;
! 	if (sess) {
! 		fprintf(stderr, "session state: %d\n",
! 			sess->ss_nested_scopes);
! 	}
! 	/* XXX pass sess->ss_nested_scopes to PyNode_Compile */
! 	co = PyNode_Compile(n, filename, sess);
  	PyNode_Free(n);
  	if (co == NULL)
  		return NULL;
***************
*** 986,992 ****
  	n = PyParser_SimpleParseString(str, start);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, filename);
  	PyNode_Free(n);
  	return (PyObject *)co;
  }
--- 997,1003 ----
  	n = PyParser_SimpleParseString(str, start);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, filename, NULL);
  	PyNode_Free(n);
  	return (PyObject *)co;
  }


From guido@digicool.com  Thu Mar  1 18:34:53 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 13:34:53 -0500
Subject: [Python-Dev] __future__ and the interactive interpreter prompt
In-Reply-To: Your message of "Thu, 01 Mar 2001 13:16:03 EST."
 <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net>
References: <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103011834.NAA16957@cj20424-a.reston1.va.home.com>

> from __future__ import nested_scopes is accepted at the interactive
> interpreter prompt but has no effect beyond the line on which it was
> entered.  You could use it with lambdas entered following a
> semicolon, I guess.
> 
> I would rather see the future statement take effect for the remainder
> of the interactive interpreter session.  I have included a first-cut
> patch below that makes this possible, using an object called
> PySessionState.  (I don't like the name, but don't have a better one;
> PyCompilerFlags?)
> 
> The idea of the session state is to record information about the state
> of an interactive session that may affect compilation.  The
> state object is created in PyRun_InteractiveLoop() and passed all the
> way through to PyNode_Compile().
> 
> Does this seem a reasonable approach?  Should I include it in the
> beta?  Any name suggestions?

I'm not keen on changing the prototypes for PyNode_Compile() and
PyRun_InteractiveOne().  I suspect that folks doing funky stuff might
be calling these directly.

Would it be a great pain to add ...Ex() versions that take a session
state, and have the old versions call this with a made-up dummy
session state?
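The pattern being asked for is the usual "...Ex()" source-compatibility
trick: add a wider entry point, and reimplement the old one on top of it
with a dummy state.  A Python-flavored sketch of the shape of it (every
name here is made up for illustration, not a real CPython API):

```python
class SessionState:
    """Hypothetical per-session compiler state (illustrative only)."""
    def __init__(self, nested_scopes=False):
        self.nested_scopes = nested_scopes

def run_interactive_one_ex(source, state):
    """New, wider entry point: callers that care pass their own state."""
    mode = "nested" if state.nested_scopes else "classic"
    return (mode, source)

def run_interactive_one(source):
    """Old signature, unchanged: delegates with a made-up dummy state."""
    return run_interactive_one_ex(source, SessionState())
```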

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Thu Mar  1 18:40:58 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 13:40:58 -0500
Subject: [Python-Dev] Finger error -- my apologies
In-Reply-To: Your message of "Thu, 01 Mar 2001 12:47:51 EST."
 <20010301124751.B24835@thyrsus.com>
References: <20010301124751.B24835@thyrsus.com>
Message-ID: <200103011840.NAA17088@cj20424-a.reston1.va.home.com>

> I meant to accept this patch, but I think I rejected it instead.
> Sorry, Ping.  Resubmit, please, if I fooed up?

There's no need to resubmit -- you should be able to reset the state
any time.  I've changed it back to None so you can try again.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From esr@thyrsus.com  Thu Mar  1 18:58:57 2001
From: esr@thyrsus.com (Eric S. Raymond)
Date: Thu, 1 Mar 2001 13:58:57 -0500
Subject: [Python-Dev] Finger error -- my apologies
In-Reply-To: <200103011840.NAA17088@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 01, 2001 at 01:40:58PM -0500
References: <20010301124751.B24835@thyrsus.com> <200103011840.NAA17088@cj20424-a.reston1.va.home.com>
Message-ID: <20010301135857.D25553@thyrsus.com>

Guido van Rossum <guido@digicool.com>:
> > I meant to accept this patch, but I think I rejected it instead.
> > Sorry, Ping.  Resubmit, please, if I fooed up?
> 
> There's no need to resubmit -- you should be able to reset the state
> any time.  I've changed it back to None so you can try again.

Done.

I also discovered that I wasn't quite the idiot I thought I had been; I
actually tripped over an odd little misfeature of Mozilla that other 
people working the patch queue should know about.

I saw "Rejected" after I thought I had clicked "Accepted" and thought
I had made both a mouse error and a thinko...

What actually happened was I clicked "Accepted" and then tried to page down
my browser.  Unfortunately the choice field was still selected -- and
guess what the last status value in the pulldown menu is, and
what the PgDn key does! :-)

Others should beware of this...
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Our society won't be truly free until "None of the Above" is always an option.


From tim.one@home.com  Thu Mar  1 19:11:14 2001
From: tim.one@home.com (Tim Peters)
Date: Thu, 1 Mar 2001 14:11:14 -0500
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: <Pine.LNX.4.10.10103010909520.862-100000@skuld.kingmanhall.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBGJDAA.tim.one@home.com>

OTOH, seeing my name in a __credits__ blurb does nothing for my ego; it
makes me involuntarily shudder at having yet another potential source of
extremely urgent personal email from strangers who can't read <0.9 wink>.

So the question is, should __credits__nanny.py look for its file of names to
rip out via a magically named file or via cmdline argument?

or-maybe-a-gui!-ly y'rs  - tim



From Greg.Wilson@baltimore.com  Thu Mar  1 19:21:13 2001
From: Greg.Wilson@baltimore.com (Greg Wilson)
Date: Thu, 1 Mar 2001 14:21:13 -0500
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
Message-ID: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com>

I'm working on Solaris, and have configured Python using
--with-cxx=g++.  I have a library "libenf.a", which depends
on several .so's (Eric Young's libeay and a couple of others).
I can't modify the library, but I'd like to wrap it so that
our QA group can write scripts to test it.

My C module was pretty simple to put together.  However, when
I load it, Python (or someone) complains that the symbols that
I know are in "libeay.so" are missing.  It's on LD_LIBRARY_PATH,
and "nm" shows that the symbols really are there.  So:

1. Do I have to do something special to allow Python to load
   .so's that extensions depend on?  If so, what?

2. Or do I have to load the .so myself prior to loading my
   extension?  If so, how?  Explicit "dlopen()" calls at the
   top of "init" don't work (presumably because the built-in
   loading has already decided that some symbols are missing).

Instead of offering a beer for the first correct answer this
time, I promise to write it up and send it to Fred Drake for
inclusion in the 2.1 release notes :-).

Thanks
Greg


From guido@digicool.com  Thu Mar  1 20:32:37 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 15:32:37 -0500
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: Your message of "Thu, 01 Mar 2001 11:37:53 EST."
 <15006.31457.377477.65547@w221.z064000254.bwi-md.dsl.cnc.net>
References: <4.3.1.2.20010228223714.00d29430@exchange.starmedia.net>
 <15006.31457.377477.65547@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103012032.PAA18322@cj20424-a.reston1.va.home.com>

> >>>>> "RT" == Robin Thomas <robin.thomas@starmedia.net> writes:
> 
>   RT> Using Python 2.0 on Win32. Am I the only person to be depressed
>   RT> by the following behavior now that __getitem__ does the work of
>   RT> __getslice__?

Jeremy:
> You may be the only person to have tried it :-).
> 
>   RT> Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32
>   >>> d = {}
>   >>> d[0:1] = 1
>   >>> d
>   {slice(0, 1, None): 1}
> 
> I think this should raise a TypeError (as you suggested later).

Me too, but it's such an unusual corner-case that I can't worry about
it too much.  The problem has to do with being backwards compatible --
we couldn't add the 3rd argument to the slice API that we wanted.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From jeremy@alum.mit.edu  Thu Mar  1 20:58:24 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 15:58:24 -0500 (EST)
Subject: [Python-Dev] __future__ and the interactive interpreter prompt
In-Reply-To: <200103011834.NAA16957@cj20424-a.reston1.va.home.com>
References: <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net>
 <200103011834.NAA16957@cj20424-a.reston1.va.home.com>
Message-ID: <15006.47088.256265.467786@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:

  GvR> I'm not keen on changing the prototypes for PyNode_Compile()
  GvR> and PyRun_InteractiveOne().  I suspect that folks doing funky
  GvR> stuff might be calling these directly.

  GvR> Would it be a great pain to add ...Ex() versions that take a
  GvR> session state, and have the old versions call this with a
  GvR> made-up dummy session state?

Doesn't seem like a big problem.  Any other issues with the approach?

Jeremy


From guido@digicool.com  Thu Mar  1 20:46:56 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 15:46:56 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: Your message of "Thu, 01 Mar 2001 17:53:43 +0100."
 <200103011653.RAA09025@core.inf.ethz.ch>
References: <200103011653.RAA09025@core.inf.ethz.ch>
Message-ID: <200103012046.PAA18395@cj20424-a.reston1.va.home.com>

> is the semantic (expressed through bytecode instrs) right?

Hi Samuele,

Thanks for bringing this up.  I agree with your predictions for these
examples, and have checked them in as part of the test_scope.py test
suite.  Fortunately Jeremy's code passes the test!

The rule is really pretty simple if you look at it through the right
glasses:

    To resolve a name, search from the inside out for either a scope
    that contains a global statement for that name, or a scope that
    contains a definition for that name (or both).

Thus, on the one hand the effect of a global statement is restricted
to the current scope, excluding nested scopes:

   def f():
       global x
       def g():
           x = 1 # new local

On the other hand, a name mentioned in a global statement hides outer
definitions of the same name, and thus has an effect on nested scopes:

    def f():
       x = 1
       def g():
           global x
           def h():
               return x # global

We shouldn't code like this, but it's good to agree on what it should
mean when encountered!
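Modern Python kept exactly this resolution rule, so both halves of it can
still be checked directly.  A small demo in that spirit (not the original
test_scope.py code):

```python
x = "global"

def f():
    x = "f-local"          # a binding in f, but ...
    def g():
        global x           # ... the inside-out search stops here: x is global
        def h():
            return x       # resolves to the module-level x, skipping f's
        return h()
    return g()

def p():
    global x               # restricted to p's own scope ...
    def q():
        x = "q-local"      # ... so this is an ordinary new local in q
        return x
    return q()
```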

--Guido van Rossum (home page: http://www.python.org/~guido/)


From jeremy@alum.mit.edu  Thu Mar  1 21:05:51 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 16:05:51 -0500 (EST)
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
 <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
 <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
 <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
Message-ID: <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>

> x=7
> def f():
>   global x
>   def g():
>     exec "x=3"
>     return x
>   print g()
> 
> f()
> 
> prints 3, not 7.

I've been meaning to reply to your original post on this subject,
which actually addresses two different issues -- global and exec.  The
example above will fail with a SyntaxError in the nested_scopes
future, because of exec in the presence of a free variable.  The error
message is bad, because it says that exec is illegal in g because g
contains nested scopes.  I may not get to fix that before the beta.

The reasoning about the error here is, as usual with exec, that name
binding is a static or compile-time property of the program text.  The
use of hyper-dynamic features like import * and exec are not allowed
when they may interfere with static resolution of names.

Buy that?

Jeremy


From guido@digicool.com  Thu Mar  1 21:01:52 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 16:01:52 -0500
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: Your message of "Thu, 01 Mar 2001 15:54:55 EST."
 <15006.46879.607192.367739@w221.z064000254.bwi-md.dsl.cnc.net>
References: <15006.46879.607192.367739@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103012101.QAA18516@cj20424-a.reston1.va.home.com>

(Adding python-dev, keeping python-list)

> Quoth Robin Thomas <robin.thomas@starmedia.net>:
> | Using Python 2.0 on Win32. Am I the only person to be depressed by the 
> | following behavior now that __getitem__ does the work of __getslice__?
> |
> | Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32
> |  >>> d = {}
> |  >>> d[0:1] = 1
> |  >>> d
> | {slice(0, 1, None): 1}
> |
> | And then, for more depression:
> |
> |  >>> d[0:1] = 2
> |  >>> d
> | {slice(0, 1, None): 1, slice(0, 1, None): 2}
> |
> | And then, for extra extra chagrin:
> |
> |  >>> print d[0:1]
> | Traceback (innermost last):
> |    File "<pyshell#11>", line 1, in ?
> |      d[0:1]
> | KeyError: slice(0, 1, None)
> 
> If it helps, you ruined my day.

Mine too. :-)

> | So, questions:
> |
> | 1) Is this behavior considered a bug by the BDFL or the community at large?

I can't speak for the community, but it smells like a bug to me.

> | If so, has a fix been conceived? Am I re-opening a long-resolved issue?

No, and no.

> | 2) If we're still open to proposed solutions, which of the following do you 
> | like:
> |
> |     a) make slices hash and cmp as their 3-tuple (start,stop,step),
> |        so that if I accidentally set a slice object as a key,
> |        I can at least re-set it or get it or del it :)

Good idea.  The SF patch manager is always open.

> |     b) have dict.__setitem__ expressly reject objects of SliceType
> |        as keys, raising your favorite in (TypeError, ValueError)

This is *also* a good idea.

> From: Donn Cave <donn@oz.net>
> 
> I think we might be able to do better.  I hacked in a quick fix
> in ceval.c that looks to me like it has the desired effect without
> closing the door to intentional slice keys (however unlikely.)
[...]
> *** Python/ceval.c.dist Thu Feb  1 14:48:12 2001
> --- Python/ceval.c      Wed Feb 28 21:52:55 2001
> ***************
> *** 3168,3173 ****
> --- 3168,3178 ----
>         /* u[v:w] = x */
>   {
>         int ilow = 0, ihigh = INT_MAX;
> +       if (u->ob_type->tp_as_mapping) {
> +               PyErr_SetString(PyExc_TypeError,
> +                       "dict object doesn't support slice assignment");
> +               return -1;
> +       }
>         if (!_PyEval_SliceIndex(v, &ilow))
>                 return -1;
>         if (!_PyEval_SliceIndex(w, &ihigh))

Alas, this isn't right.  It defeats the purpose completely: the whole
point was that you should be able to write a sequence class that
supports extended slices.  This uses __getitem__ and __setitem__, but
class instances have a nonzero tp_as_mapping pointer too!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From barry@digicool.com  Thu Mar  1 21:11:32 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Thu, 1 Mar 2001 16:11:32 -0500
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
References: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com>
Message-ID: <15006.47876.237152.882774@anthem.wooz.org>

>>>>> "GW" == Greg Wilson <Greg.Wilson@baltimore.com> writes:

    GW> I'm working on Solaris, and have configured Python using
    GW> --with-cxx=g++.  I have a library "libenf.a", which depends on
    GW> several .so's (Eric Young's libeay and a couple of others).  I
    GW> can't modify the library, but I'd like to wrap it so that our
    GW> QA group can write scripts to test it.

    GW> My C module was pretty simple to put together.  However, when
    GW> I load it, Python (or someone) complains that the symbols that
    GW> I know are in "libeay.so" are missing.  It's on
    GW> LD_LIBRARY_PATH, and "nm" shows that the symbols really are
    GW> there.  So:

    | 1. Do I have to do something special to allow Python to load
    |    .so's that extensions depend on?  If so, what?

Greg, it's been a while since I've worked on Solaris, but here's what
I remember.  This is all circa Solaris 2.5/2.6.

LD_LIBRARY_PATH only helps the linker find dynamic libraries at
compile/link time.  It's equivalent to the compiler's -L option.  It
does /not/ help the dynamic linker (ld.so) find your libraries at
run-time.  For that, you need LD_RUN_PATH or the -R option.  I'm of
the opinion that if you are specifying -L to the compiler, you should
always also specify -R, and that using -L/-R is always better than
LD_LIBRARY_PATH/LD_RUN_PATH (because the former is done by the person
doing the install and the latter is a burden imposed on all your
users).

There's an easy way to tell if your .so's are going to give you
problems.  Run `ldd mymodule.so' and see what the linker shows for the
dependencies.  If ldd can't find a dependency, it'll tell you;
otherwise, it shows you the path to the dependent .so files.  If ldd
has a problem, you'll have a problem when you try to import the module.
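A sketch of what that looks like in practice (the module name and
library paths below are illustrative, not from any real build):

```shell
# Link time: -L tells the linker where to find libeay.so and friends;
# run time: -R (passed through to the Solaris linker via -Wl, when
# driving the link with gcc) embeds the same directory so ld.so can
# find them later without LD_LIBRARY_PATH.
gcc -shared -o enfmodule.so enfmodule.o \
    -L/opt/crypto/lib -Wl,-R/opt/crypto/lib -lenf -leay

# Verify that every dependency resolves before trying to import:
ldd enfmodule.so
```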

IIRC, distutils had a problem in this regard a while back, but these
days it seems to Just Work for me on Linux.  However, Linux is
slightly different in that there's a file /etc/ld.so.conf that you can
use to specify additional directories for ld.so to search at run-time,
so it can be fixed "after the fact".

    GW> Instead of offering a beer for the first correct answer this
    GW> time, I promise to write it up and send it to Fred Drake for
    GW> inclusion in the 2.1 release notes :-).

Oh no you don't!  You don't get off that easily.  See you next
week. :)

-Barry


From barry@digicool.com  Thu Mar  1 21:21:37 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Thu, 1 Mar 2001 16:21:37 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
References: <200103011653.RAA09025@core.inf.ethz.ch>
 <200103012046.PAA18395@cj20424-a.reston1.va.home.com>
Message-ID: <15006.48481.807174.69908@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:

    GvR>     To resolve a name, search from the inside out for either
    GvR> a scope that contains a global statement for that name, or a
    GvR> scope that contains a definition for that name (or both).

I think that's an excellent rule, Guido -- hopefully it's captured
somewhere in the docs. :)  I think it yields behavior that is both
easily discovered by visual code inspection and easily understood.

-Barry


From greg@cosc.canterbury.ac.nz  Thu Mar  1 21:54:45 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 02 Mar 2001 10:54:45 +1300 (NZDT)
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <15006.32624.826559.907667@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103012154.KAA02307@s454.cosc.canterbury.ac.nz>

Jeremy:

> My preference is that global kills the local-decl only in one scope.

I agree, because otherwise there would be no way of
*undoing* the effect of a global in an outer scope.

The way things are, I can write a function

  def f():
    x = 3
    return x

and be assured that x will always be local, no matter what
environment I move the function into. I like this property.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From thomas@xs4all.net  Thu Mar  1 22:04:22 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 1 Mar 2001 23:04:22 +0100
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
In-Reply-To: <15006.47876.237152.882774@anthem.wooz.org>; from barry@digicool.com on Thu, Mar 01, 2001 at 04:11:32PM -0500
References: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com> <15006.47876.237152.882774@anthem.wooz.org>
Message-ID: <20010301230422.M9678@xs4all.nl>

On Thu, Mar 01, 2001 at 04:11:32PM -0500, Barry A. Warsaw wrote:

>     | 1. Do I have to do something special to allow Python to load
>     |    .so's that extensions depend on?  If so, what?

> Greg, it's been a while since I've worked on Solaris, but here's what
> I remember.  This is all circa Solaris 2.5/2.6.

It worked the same way in SunOS 4.x, I believe.

> I'm of the opinion that if you are specifying -L to the compiler, you
> should always also specify -R, and that using -L/-R is always better than
> LD_LIBRARY_PATH/LD_RUN_PATH (because the former is done by the person
> doing the install and the latter is a burden imposed on all your users).

FWIW, I concur with the entire story. In my experience it's pretty
SunOS/Solaris specific (in fact, I long wondered why one of my C books spent
so much time explaining -R/-L, even though it wasn't necessary on my
platforms of choice at that time ;) but it might also apply to other
Solaris-inspired shared-library environments (HP-UX ? AIX ? IRIX ?)

> IIRC, distutils had a problem in this regard a while back, but these
> days it seems to Just Work for me on Linux.  However, Linux is
> slightly different in that there's a file /etc/ld.so.conf that you can
> use to specify additional directories for ld.so to search at run-time,
> so it can be fixed "after the fact".

BSDI uses the same /etc/ld.so.conf mechanism. However, LD_LIBRARY_PATH does
get used on linux, BSDI and IIRC FreeBSD as well, but as a runtime
environment variable. The /etc/ld.so.conf file gets compiled into a cache of
available libraries using 'ldconfig'. On FreeBSD, there is no
'/etc/ld.so.conf' file; instead, you use 'ldconfig -m <path>' to add <path>
to the current cache, and add or modify the definition of
${ldconfig_path} in /etc/rc.conf. (which is used in the bootup procedure to
create a new cache, in case the old one was f'd up.)

I imagine OpenBSD and NetBSD are based off of FreeBSD, not BSDI. (BSDI was
late in adopting ELF, and obviously based most of it on Linux, for some
reason.)

I-wonder-how-it-works-on-Windows-ly y'rs,

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From barry@digicool.com  Thu Mar  1 22:12:27 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Thu, 1 Mar 2001 17:12:27 -0500
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
References: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com>
 <15006.47876.237152.882774@anthem.wooz.org>
 <20010301230422.M9678@xs4all.nl>
Message-ID: <15006.51531.427250.884726@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    >> Greg, it's been a while since I've worked on Solaris, but
    >> here's what I remember.  This is all circa Solaris 2.5/2.6.

    TW> It worked the same way in SunOS 4.x, I believe.

Ah, yes, I remember SunOS 4.x.  Remember SunOS 3.5 and earlier?  Or
even the Sun 1's?  :) NIST/NBS had at least one of those boxes still
rattling around when I left.  IIRC, it ran our old news server for
years.

good-old-days-ly y'rs,
-Barry


From thomas@xs4all.net  Thu Mar  1 22:21:07 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 1 Mar 2001 23:21:07 +0100
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: <200103012101.QAA18516@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 01, 2001 at 04:01:52PM -0500
References: <15006.46879.607192.367739@w221.z064000254.bwi-md.dsl.cnc.net> <200103012101.QAA18516@cj20424-a.reston1.va.home.com>
Message-ID: <20010301232107.O9678@xs4all.nl>

On Thu, Mar 01, 2001 at 04:01:52PM -0500, Guido van Rossum wrote:
> > Quoth Robin Thomas <robin.thomas@starmedia.net>:

[ Dicts accept slice objects as keys in assignment, but not in retrieval ]

> > | 1) Is this behavior considered a bug by the BDFL or the community at large?

> I can't speak for the community, but it smells like a bug to me.

Speaking for the person who implemented the slice-fallback to sliceobjects:
yes, it's a bug, because it's an unintended consequence of the change :) The
intention was to eradicate the silly discrepancy between indexing, normal
slices and extended slices: normal indexing works through __getitem__,
sq_item and mp_subscript. Normal (two argument) slices work through
__getslice__ and sq_slice. Extended slices work through __getitem__, sq_item
and mp_subscript again.

Note, however, that though *this* particular bug is new in Python 2.0, it
wasn't actually absent in 1.5.2 either!

Python 1.5.2 (#0, Feb 20 2001, 23:57:58)  [GCC 2.95.3 20010125 (prerelease)]
on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> d = {}
>>> d[0:1] = "spam"
Traceback (innermost last):
  File "<stdin>", line 1, in ?
TypeError: object doesn't support slice assignment
>>> d[0:1:1] = "spam"
>>> d[0:1:] = "spam"
>>> d
{slice(0, 1, None): 'spam', slice(0, 1, 1): 'spam'}

The bug is just extended to cover normal slices as well, because the absence
of sq_slice now causes Python to fall back to normal item setting/retrieval.

I think making slices hashable objects makes the most sense. They can just
be treated as a three-tuple of the values in the slice, or some such.
Falling back to just sq_item/__getitem__ and not mp_subscript might make
some sense, but it seems a bit of an artificial split, since classes that
pretend to be mappings would be treated differently than types that pretend
to be mappings.
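A minimal sketch of the three-tuple idea, using a hypothetical
slice_key() helper to normalize slices into hashable keys rather than
changing the slice type itself:

```python
def slice_key(s):
    # Hypothetical helper: treat a slice as the 3-tuple
    # (start, stop, step), which hashes and compares by value --
    # the behaviour proposed for slice objects themselves.
    return (s.start, s.stop, s.step)

d = {}
d[slice_key(slice(0, 1))] = 1
d[slice_key(slice(0, 1))] = 2   # same key: re-assignment, not a second entry
assert d == {(0, 1, None): 2}

# Distinct slices stay distinct keys:
assert slice_key(slice(0, 1, 1)) != slice_key(slice(0, 1))
```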

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim.one@home.com  Thu Mar  1 22:37:35 2001
From: tim.one@home.com (Tim Peters)
Date: Thu, 1 Mar 2001 17:37:35 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <15006.48481.807174.69908@anthem.wooz.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCMECKJDAA.tim.one@home.com>

> >>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:
>
>     GvR>     To resolve a name, search from the inside out for either
>     GvR> a scope that contains a global statement for that name, or a
>     GvR> scope that contains a definition for that name (or both).
>
[Barry A. Warsaw]
> I think that's an excellent rule Guido --

Hmm.  After an hour of consideration, I would agree, provided only that the
rule also say you *stop* upon finding the first one <wink>.

> hopefully it's captured somewhere in the docs. :)

The python-dev archives are incorporated into the docs by implicit reference.

you-found-it-you-fix-it-ly y'rs  - tim



From martin@loewis.home.cs.tu-berlin.de  Thu Mar  1 22:39:01 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 1 Mar 2001 23:39:01 +0100
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
Message-ID: <200103012239.f21Md1i01641@mira.informatik.hu-berlin.de>

> I have a library "libenf.a", which depends on several .so's (Eric
> Young's libeay and a couple of others).

> My C module was pretty simple to put together.  However, when I load
> it, Python (or someone) complains that the symbols that I know are
> in "libeay.so" are missing.

If it says that the symbols are missing, it is *not* a problem of
LD_LIBRARY_PATH, LD_RUN_PATH (I can't find documentation or any mention
of that variable anywhere...), or the -R option.

Instead, the most likely cause is that you forgot to link the .so when
linking the extension module. I.e. you should do

gcc -o foomodule.so foomodule.o -lenf -leay

If you omit the -leay, you get a shared object which will report
missing symbols when being loaded, except when the shared library was
loaded already for some other reason.

If you *did* specify -leay, it still might be that the symbols are not
available in the shared library. You said that nm displayed them, but
will nm still display them after you've applied strip(1) to the library?
To see the symbols found by ld.so.1, you need to use the -D option of
nm(1).

Regards,
Martin


From jeremy@alum.mit.edu  Thu Mar  1 23:34:44 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 18:34:44 -0500 (EST)
Subject: [Python-Dev] nested scopes and future status
Message-ID: <15006.56468.16421.206413@w221.z064000254.bwi-md.dsl.cnc.net>

There are several loose ends in the nested scopes changes that I won't
have time to fix before the beta.  Here's a laundry list of tasks that
remain.  I don't think any of these is crucial for the release.
Holler if there's something you'd like me to fix tonight.

- Integrate the parsing of future statements into the _symtable
  module's interface.  This interface is new with 2.1 and
  undocumented, so its deficiency here will not affect any code.

- Update traceback.py to understand SyntaxErrors that have a text
  attribute and an offset of None.  It should not print the caret.

- PyErr_ProgramText() should be called when an exception is printed
  rather than when it is raised.

- Fix pdb to support nested scopes.

- Produce a better error message/warning for code like this:
  def f(x):
      def g():
          exec ...
          print x
  The warning message should say that exec is not allowed in a nested
  function with free variables.  It currently says that g *contains* a
  nested function with free variables.

- Update the documentation.

Jeremy


From pedroni@inf.ethz.ch  Thu Mar  1 23:22:20 2001
From: pedroni@inf.ethz.ch (Samuele Pedroni)
Date: Fri, 2 Mar 2001 00:22:20 +0100
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com><15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net><000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <004101c0a2a6$781cd440$f979fea9@newmexico>

Hi.


> > x=7
> > def f():
> >   global x
> >   def g():
> >     exec "x=3"
> >     return x
> >   print g()
> > 
> > f()
> > 
> > prints 3, not 7.
> 
> I've been meaning to reply to your original post on this subject,
> which actually addresses two different issues -- global and exec.  The
> example above will fail with a SyntaxError in the nested_scopes
> future, because of exec in the presence of a free variable.  The error
> message is bad, because it says that exec is illegal in g because g
> contains nested scopes.  I may not get to fix that before the beta.
> 
> The reasoning about the error here is, as usual with exec, that name
> binding is a static or compile-time property of the program text.  The
> use of hyper-dynamic features like import * and exec are not allowed
> when they may interfere with static resolution of names.
> 
> Buy that?
Yes, I buy that.  (I had tried it with the old a2.)
So this code will also raise an error?  Or am I not understanding the
point, and the error happens because of the global decl?

# top-level
def g():
  exec "x=3"
  return x

That's ok by me, but it kills many naive uses of exec.  I'm wondering
whether it makes more sense to directly take the next big step and
issue an error (under future nested_scopes) for *all* uses of exec
without 'in', because every use of a builtin will cause the error...

regards



From jeremy@alum.mit.edu  Thu Mar  1 23:22:28 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 18:22:28 -0500 (EST)
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: <004101c0a2a6$781cd440$f979fea9@newmexico>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
 <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
 <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
 <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
 <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
 <004101c0a2a6$781cd440$f979fea9@newmexico>
Message-ID: <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "SP" == Samuele Pedroni <pedroni@inf.ethz.ch> writes:

  SP> # top-level
  SP> def g():
  SP>   exec "x=3" 
  SP>   return x

At the top level, no closure is created because the enclosing scope
is not a function scope.  I believe that's the right thing to do,
except that the exec "x=3" also assigns to the global.

I'm not sure if there is a strong justification for allowing this
form, except that it is the version of exec that is most likely to
occur in legacy code.

Jeremy


From guido@digicool.com  Fri Mar  2 02:17:38 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 21:17:38 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: Your message of "Thu, 01 Mar 2001 17:37:35 EST."
 <LNBBLJKPBEHFEDALKOLCMECKJDAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCMECKJDAA.tim.one@home.com>
Message-ID: <200103020217.VAA19891@cj20424-a.reston1.va.home.com>

> > >>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:
> >
> >     GvR>     To resolve a name, search from the inside out for either
> >     GvR> a scope that contains a global statement for that name, or a
> >     GvR> scope that contains a definition for that name (or both).
> >
> [Barry A. Warsaw]
> > I think that's an excellent rule Guido --
> 
> Hmm.  After an hour of consideration,

That's quick -- it took me longer than that to come to the conclusion
that Jeremy had actually done the right thing. :-)

> I would agree, provided only that the
> rule also say you *stop* upon finding the first one <wink>.
> 
> > hopefully it's captured somewhere in the docs. :)
> 
> The python-dev archives are incorporated into the docs by implicit reference.
> 
> you-found-it-you-fix-it-ly y'rs  - tim

I'm sure the docs can stand some updates after the 2.1b1 crunch is
over to document what all we did.  After the conference!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Fri Mar  2 02:35:01 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 21:35:01 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 18:22:28 EST."
 <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico>
 <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103020235.VAA22273@cj20424-a.reston1.va.home.com>

> >>>>> "SP" == Samuele Pedroni <pedroni@inf.ethz.ch> writes:
> 
>   SP> # top-level
>   SP> def g():
>   SP>   exec "x=3" 
>   SP>   return x
> 
> At the top level, no closure is created because the enclosing scope
> is not a function scope.  I believe that's the right thing to do,
> except that the exec "x=3" also assigns to the global.
> 
> I'm not sure if there is a strong justification for allowing this
> form, except that it is the version of exec that is most likely to
> occur in legacy code.

Unfortunately this used to work, using a gross hack: when an exec (or
import *) was present inside a function, the namespace semantics *for
that function* was changed to the pre-0.9.1 semantics, where all names
are looked up *at run time* first in the locals then in the globals
and then in the builtins.

I don't know how common this is -- it's pretty fragile.  If there's a
great clamor, we can put this behavior back after b1 is released.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Fri Mar  2 02:43:34 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 21:43:34 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 21:35:01 EST."
 <200103020235.VAA22273@cj20424-a.reston1.va.home.com>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico> <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>
 <200103020235.VAA22273@cj20424-a.reston1.va.home.com>
Message-ID: <200103020243.VAA24384@cj20424-a.reston1.va.home.com>

> >   SP> # top-level
> >   SP> def g():
> >   SP>   exec "x=3" 
> >   SP>   return x

[me]
> Unfortunately this used to work, using a gross hack: when an exec (or
> import *) was present inside a function, the namespace semantics *for
> that function* was changed to the pre-0.9.1 semantics, where all names
> are looked up *at run time* first in the locals then in the globals
> and then in the builtins.
> 
> I don't know how common this is -- it's pretty fragile.  If there's a
> great clamor, we can put this behavior back after b1 is released.

I spoke too soon.  It just works in the latest 2.1b1.  Or am I missing
something?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From ping@lfw.org  Fri Mar  2 02:50:41 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Thu, 1 Mar 2001 18:50:41 -0800 (PST)
Subject: [Python-Dev] Re: Is outlawing-nested-import-* only an implementation issue?
In-Reply-To: <14998.33979.566557.956297@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <Pine.LNX.4.10.10102241727410.13155-100000@localhost>

On Fri, 23 Feb 2001, Jeremy Hylton wrote:
> I think the meaning of print x should be statically determined.  That
> is, the programmer should be able to determine the binding environment
> in which x will be resolved (for print x) by inspection of the code.

I haven't had time in a while to follow up on this thread, but i just
wanted to say that i think this is a reasonable and sane course of
action.  I see the flaws in the model i was advocating, and i'm sorry
for consuming all that time in the discussion.


-- ?!ng


Post Scriptum:

On Fri, 23 Feb 2001, Jeremy Hylton wrote:
>   KPY> I tried STk Scheme, guile, and elisp, and they all do this.
> 
> I guess I'm just dense then.  Can you show me an example?

The example is pretty much exactly what you wrote:

    (define (f)
        (eval '(define y 2))
        y)

It produced 2.

But several sources have confirmed that this is just bad implementation
behaviour, so i'm willing to consider that a red herring.  Believe it
or not, in some Schemes, the following actually happens!

            STk> (define x 1)
            x
            STk> (define (func flag)
                     (if flag (define x 2))
                     (lambda () (set! x 3)))
            func
            STk> ((func #t))
            STk> x
            1
            STk> ((func #f))
            STk> x
            3

More than one professor that i showed the above to screamed.




From jeremy@alum.mit.edu  Fri Mar  2 01:12:37 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 20:12:37 -0500 (EST)
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: <200103020243.VAA24384@cj20424-a.reston1.va.home.com>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
 <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
 <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
 <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
 <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
 <004101c0a2a6$781cd440$f979fea9@newmexico>
 <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>
 <200103020235.VAA22273@cj20424-a.reston1.va.home.com>
 <200103020243.VAA24384@cj20424-a.reston1.va.home.com>
Message-ID: <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:

  >> >   SP> # top-level
  >> >   SP> def g():
  >> >   SP>   exec "x=3"
  >> >   SP>   return x

  GvR> [me]
  >> Unfortunately this used to work, using a gross hack: when an exec
  >> (or import *) was present inside a function, the namespace
  >> semantics *for that function* was changed to the pre-0.9.1
  >> semantics, where all names are looked up *at run time* first in
  >> the locals then in the globals and then in the builtins.
  >>
  >> I don't know how common this is -- it's pretty fragile.  If
  >> there's a great clamor, we can put this behavior back after b1 is
  >> released.

  GvR> I spoke too soon.  It just works in the latest 2.1b1.  Or am I
  GvR> missing something?

The nested scopes rules don't kick in until you've got one function
nested in another.  The top-level namespace is treated differently
than other function namespaces.  If a function is defined at the
top level, then all its free variables are globals.  As a result, the
old rules still apply.

Since class scopes are ignored for nesting, methods defined in
top-level classes are handled the same way.

I'm not completely sure this makes sense, although it limits code
breakage; most functions are defined at the top-level or in classes!
I think it is fairly clear, though.
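A quick sketch of the top-level case (values illustrative):

```python
# A free variable in a top-level function is simply a global: there is
# no enclosing function scope to close over.
def g():
    return x   # looked up as a global, at call time

x = 42
assert g() == 42

x = 43
assert g() == 43   # rebinding the global is seen on the next call
```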

Jeremy


From guido@digicool.com  Fri Mar  2 03:04:19 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 22:04:19 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 20:12:37 EST."
 <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico> <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net> <200103020235.VAA22273@cj20424-a.reston1.va.home.com> <200103020243.VAA24384@cj20424-a.reston1.va.home.com>
 <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103020304.WAA24620@cj20424-a.reston1.va.home.com>

>   >> >   SP> # top-level
>   >> >   SP> def g():
>   >> >   SP>   exec "x=3"
>   >> >   SP>   return x
> 
>   GvR> [me]
>   >> Unfortunately this used to work, using a gross hack: when an exec
>   >> (or import *) was present inside a function, the namespace
>   >> semantics *for that function* was changed to the pre-0.9.1
>   >> semantics, where all names are looked up *at run time* first in
>   >> the locals then in the globals and then in the builtins.
>   >>
>   >> I don't know how common this is -- it's pretty fragile.  If
>   >> there's a great clamor, we can put this behavior back after b1 is
>   >> released.
> 
>   GvR> I spoke too soon.  It just works in the latest 2.1b1.  Or am I
>   GvR> missing something?
> 
> The nested scopes rules don't kick in until you've got one function
> nested in another.  The top-level namespace is treated differently
> than other function namespaces.  If a function is defined at the
> top-level then all its free variables are globals.  As a result, the
> old rules still apply.

This doesn't make sense.  If the free variables were truly considered
globals, the reference to x would raise a NameError, because the exec
doesn't define it at the global level -- it defines it at the local
level.  So apparently you are generating LOAD_NAME instead of
LOAD_GLOBAL for free variables in toplevel functions.  Oh well, this
does the job!
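
(For reference: in current CPython, where the bare exec statement is
gone, a free name in a top-level function always compiles to
LOAD_GLOBAL rather than LOAD_NAME, which dis can confirm.)

```python
import dis

def f():
    return x  # free variable in a top-level function

opnames = {ins.opname for ins in dis.get_instructions(f)}
assert "LOAD_GLOBAL" in opnames
assert "LOAD_NAME" not in opnames
```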

> Since class scopes are ignored for nesting, methods defined in
> top-level classes are handled the same way.
> 
> I'm not completely sure this makes sense, although it limits code
> breakage; most functions are defined at the top-level or in classes!
> I think it is fairly clear, though.

Yeah, it's pretty unlikely that there will be much code breakage of
this form:

def f():
    def g():
        exec "x = 1"
        return x

(Hm, trying this I see that it generates a warning, but with the wrong
filename.  I'll see if I can use symtable_warn() here.)
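
For comparison, the explicit-namespace form that survived (spelled as
exec's second argument in modern Python, the equivalent of the old
`exec ... in ns`) has no such ambiguity; a sketch:

```python
def g():
    ns = {}
    exec("x = 1", ns)   # modern spelling of `exec "x = 1" in ns`
    return ns["x"]      # no ambiguity: we say exactly where x lives

assert g() == 1
```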

--Guido van Rossum (home page: http://www.python.org/~guido/)


From jeremy@alum.mit.edu  Fri Mar  2 01:31:28 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 20:31:28 -0500 (EST)
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: <200103020304.WAA24620@cj20424-a.reston1.va.home.com>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
 <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
 <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
 <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
 <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
 <004101c0a2a6$781cd440$f979fea9@newmexico>
 <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>
 <200103020235.VAA22273@cj20424-a.reston1.va.home.com>
 <200103020243.VAA24384@cj20424-a.reston1.va.home.com>
 <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net>
 <200103020304.WAA24620@cj20424-a.reston1.va.home.com>
Message-ID: <15006.63472.628208.875808@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:

  >> The nested scopes rules don't kick in until you've got one
  >> function nested in another.  The top-level namespace is treated
  >> differently than other function namespaces.  If a function is
  >> defined at the top-level then all its free variables are globals.
  >> As a result, the old rules still apply.

  GvR> This doesn't make sense.  If the free variables were truly
  GvR> considered globals, the reference to x would raise a NameError,
  GvR> because the exec doesn't define it at the global level -- it
  GvR> defines it at the local level.  So apparently you are
  GvR> generating LOAD_NAME instead of LOAD_GLOBAL for free variables
  GvR> in toplevel functions.  Oh well, this does the job!

Actually, I only generate LOAD_NAME for unoptimized, top-level
function namespaces.  These are exactly the old rules and I avoided
changing them for top-level functions, except when they contained a
nested function.

If we eliminate exec without "in," this is yet another problem that
goes away.

Jeremy


From guido@digicool.com  Fri Mar  2 04:07:16 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 23:07:16 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 20:31:28 EST."
 <15006.63472.628208.875808@w221.z064000254.bwi-md.dsl.cnc.net>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico> <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net> <200103020235.VAA22273@cj20424-a.reston1.va.home.com> <200103020243.VAA24384@cj20424-a.reston1.va.home.com> <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net> <200103020304.WAA24620@cj20424-a.reston1.va.home.com>
 <15006.63472.628208.875808@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103020407.XAA30061@cj20424-a.reston1.va.home.com>

[Jeremy]
>   >> The nested scopes rules don't kick in until you've got one
>   >> function nested in another.  The top-level namespace is treated
>   >> differently than other function namespaces.  If a function is
>   >> defined at the top-level then all its free variables are globals.
>   >> As a result, the old rules still apply.
> 
>   GvR> This doesn't make sense.  If the free variables were truly
>   GvR> considered globals, the reference to x would raise a NameError,
>   GvR> because the exec doesn't define it at the global level -- it
>   GvR> defines it at the local level.  So apparently you are
>   GvR> generating LOAD_NAME instead of LOAD_GLOBAL for free variables
>   GvR> in toplevel functions.  Oh well, this does the job!

[Jeremy]
> Actually, I only generate LOAD_NAME for unoptimized, top-level
> function namespaces.  These are exactly the old rules and I avoided
> changing them for top-level functions, except when they contained a
> nested function.

Aha.

> If we eliminate exec without "in," this is yet another problem that
> goes away.

But that's for another release...  That will probably get a lot of
resistance from some category of users!

So it's fine for now.  Thanks, Jeremy!  Great job!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From fredrik@effbot.org  Fri Mar  2 08:35:59 2001
From: fredrik@effbot.org (Fredrik Lundh)
Date: Fri, 2 Mar 2001 09:35:59 +0100
Subject: [Python-Dev] a small C style question
Message-ID: <05f101c0a2f3$cf4bae10$e46940d5@hagrid>

DEC's OpenVMS compiler is a bit pickier than most other compilers.
among other things, it correctly notices that the "code" variable in
this statement is an unsigned variable:

    UNICODEDATA:

        if (code < 0 || code >= 65536)
    ........^
    %CC-I-QUESTCOMPARE, In this statement, the unsigned 
    expression "code" is being compared with a relational
    operator to a constant whose value is not greater than
    zero.  This might not be what you intended.
    at line number 285 in file UNICODEDATA.C

the easiest solution would of course be to remove the "code < 0"
part, but code is a Py_UCS4 variable.  what if someone some day
changes Py_UCS4 to a 64-bit signed integer, for example?

what's the preferred style?

1) leave it as is, and let OpenVMS folks live with the
compiler complaint

2) get rid of "code < 0" and hope that nobody messes
up the Py_UCS4 declaration

3) cast "code" to a known unsigned type, e.g:

        if ((unsigned int) code >= 65536)

Cheers /F



From mwh21@cam.ac.uk  Fri Mar  2 12:58:49 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: Fri, 2 Mar 2001 12:58:49 +0000 (GMT)
Subject: [Python-Dev] python-dev summary, 2001-02-15 - 2001-03-01
Message-ID: <Pine.LNX.4.10.10103021255240.18596-100000@localhost.localdomain>

Thanks for all the positive feedback for the last summary!

 This is a summary of traffic on the python-dev mailing list between
 Feb 15 and Feb 28 (inclusive) 2001.  It is intended to inform the
 wider Python community of ongoing developments.  To comment, just
 post to python-list@python.org or comp.lang.python in the usual
 way. Give your posting a meaningful subject line, and if it's about a
 PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep
 iteration).  All python-dev members are interested in seeing ideas
 discussed by the community, so don't hesitate to take a stance on a
 PEP if you have an opinion.

 This is the second python-dev summary written by Michael Hudson.
 Previous summaries were written by Andrew Kuchling and can be found
 at:

   <http://www.amk.ca/python/dev/>

 New summaries will appear at:

  <http://starship.python.net/crew/mwh/summaries/>

 and will continue to be archived at Andrew's site.

   Posting distribution (with apologies to mbm)

   Number of articles in summary: 400

       |                         ]|[                            
       |                         ]|[                            
    60 |                         ]|[                            
       |                         ]|[                            
       |                         ]|[                            
       |                         ]|[                     ]|[    
       |                         ]|[     ]|[             ]|[    
       |                         ]|[     ]|[             ]|[    
    40 |                         ]|[     ]|[             ]|[ ]|[
       |                         ]|[     ]|[             ]|[ ]|[
       |     ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
    20 | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[             ]|[ ]|[ ]|[ ]|[         ]|[ ]|[ ]|[
       | ]|[ ]|[             ]|[ ]|[ ]|[ ]|[         ]|[ ]|[ ]|[
       | ]|[ ]|[             ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[
       | ]|[ ]|[     ]|[     ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[
       | ]|[ ]|[     ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[
     0 +-033-037-002-008-006-021-071-037-051-012-002-021-054-045
        Thu 15| Sat 17| Mon 19| Wed 21| Fri 23| Sun 25| Tue 27|
            Fri 16  Sun 18  Tue 20  Thu 22  Sat 24  Mon 26  Wed 28

 A slightly quieter week on python-dev.  As you can see, most Python
 developers are too well-adjusted to post much on weekends.  Or
 Mondays.

 There was a lot of traffic on the bugs, patches and checkins lists in
 preparation for the upcoming 2.1b1 release.


    * backwards incompatibility *

 Most of the posts in the large spike in the middle of the posting
 distribution were on the subject of backward compatibility.  One of
 the unexpected (by those of us that hadn't thought too hard about it)
 consequences of nested scopes was that some code using the dreaded
 "from-module-import-*" construct inside functions became ambiguous, and
 the plan was to ban such code in Python 2.1.  This provoked a storm
 of protest from many quarters, including python-dev and
 comp.lang.python.  If you really want to read all of this, start
 here:

  <http://mail.python.org/pipermail/python-dev/2001-February/013003.html>

 However, as you will know if you read comp.lang.python, PythonLabs
 took note, and in:

  <http://mail.python.org/pipermail/python-dev/2001-February/013125.html>
 
 Guido announced that the new nested scopes behaviour would be opt-in
 in 2.1, but code that will break in python 2.2 when nested scopes
 become the default will produce a warning in 2.1.  To get the new
 behaviour in a module, one will need to put

    from __future__ import nested_scopes

 at the top of the module.  It is possible this gimmick will be used
 to introduce further backwards-incompatible features in the future.
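
(For what it's worth, the __future__ module in current CPython still
records exactly these release numbers for the nested_scopes feature:)

```python
import __future__

feat = __future__.nested_scopes
assert feat.getOptionalRelease()[:2] == (2, 1)   # opt-in since 2.1
assert feat.getMandatoryRelease()[:2] == (2, 2)  # default since 2.2
```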


    * obmalloc *

 After some more discussion, including Neil Schemenauer pointing out
 that obmalloc might enable him to make the cycle GC faster, obmalloc
 was finally checked in.

 There's a second patch from Vladimir Marangozov implementing a memory
 profiler.  (sorry for the long line)

  <http://sourceforge.net/tracker/index.php?func=detail&aid=401229&group_id=5470&atid=305470>

 Opinion was muted about this; as Neil summed up in:

  <http://mail.python.org/pipermail/python-dev/2001-February/013205.html>

 no one cares enough to put the time into it and review this patch.
 Sufficiently violently wielded opinions may swing the day...


    * pydoc *

 Ka-Ping Yee checked in his amazing pydoc.  pydoc was described in

  <http://mail.python.org/pipermail/python-dev/2001-January/011538.html>

 It gives command line and web browser access to Python's
 documentation, and will be installed as a separate script in 2.1.
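
 The same machinery is also importable; whether 2.1's pydoc exposed
 this exact helper I won't claim, but in current CPython
 pydoc.render_doc returns the text the command-line interface prints:

```python
import pydoc

text = pydoc.render_doc(len)  # same text the `pydoc len` command shows
assert "built-in function len" in text
```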


    * other stuff *

 It is believed that the case-sensitive import issues mentioned in the
 last summary have been sorted out, although it will be hard to be
 sure until the beta.

 The unit-test discussion petered out.  Nothing has been checked in
 yet.

 The iterators discussion seems to have disappeared.  At least, your
 author can't find it!

Cheers,
M.



From guido@digicool.com  Fri Mar  2 14:22:27 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 09:22:27 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
Message-ID: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>

I was tickled when I found a quote from Tim Berners-Lee about Python
here: http://www.w3.org/2000/10/swap/#L88

Most quotable part: "Python is a language you can get into on one
battery!"

We should be able to use that for PR somewhere...

--Guido van Rossum (home page: http://www.python.org/~guido/)


From mwh21@cam.ac.uk  Fri Mar  2 14:32:01 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 02 Mar 2001 14:32:01 +0000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: "A.M. Kuchling"'s message of "Wed, 28 Feb 2001 12:55:12 -0800"
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk>

"A.M. Kuchling" <akuchling@users.sourceforge.net> writes:

> --- NEW FILE: pydoc ---
> #!/usr/bin/env python
> 

Could I make a request that this gets munged to point to the python
that's being installed at build time?  I've just built from CVS,
installed in /usr/local, and:

$ pydoc -g
Traceback (most recent call last):
  File "/usr/local/bin/pydoc", line 3, in ?
    import pydoc
ImportError: No module named pydoc

because the /usr/bin/env python thing hits the older python in /usr
first.

Don't bother if this is actually difficult.

Cheers,
M.



From guido@digicool.com  Fri Mar  2 14:34:37 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 09:34:37 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: Your message of "02 Mar 2001 14:32:01 GMT."
 <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk>
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net>
 <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <200103021434.JAA06630@cj20424-a.reston1.va.home.com>

> "A.M. Kuchling" <akuchling@users.sourceforge.net> writes:
> 
> > --- NEW FILE: pydoc ---
> > #!/usr/bin/env python
> > 
> 
> Could I make a request that this gets munged to point to the python
> that's being installed at build time?  I've just built from CVS,
> installed in /usr/local, and:
> 
> $ pydoc -g
> Traceback (most recent call last):
>   File "/usr/local/bin/pydoc", line 3, in ?
>     import pydoc
> ImportError: No module named pydoc
> 
> because the /usr/bin/env python thing hits the older python in /usr
> first.
> 
> Don't bother if this is actually difficult.

This could become a standard distutils feature!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From akuchlin@mems-exchange.org  Fri Mar  2 14:56:17 2001
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 2 Mar 2001 09:56:17 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <200103021434.JAA06630@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 02, 2001 at 09:34:37AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com>
Message-ID: <20010302095617.A11182@ute.cnri.reston.va.us>

On Fri, Mar 02, 2001 at 09:34:37AM -0500, Guido van Rossum wrote:
>> because the /usr/bin/env python thing hits the older python in /usr
>> first.
>> Don't bother if this is actually difficult.
>
>This could become a standard distutils feature!

It already does this for regular distributions (see build_scripts.py),
but running with a newly built Python causes problems; it uses
sys.executable, which results in '#!python' at build time.  I'm not
sure how to fix this; perhaps the Makefile should always set a
BUILDING_PYTHON environment variable, and the Distutils could check
for its being set.  

--amk



From nas@arctrix.com  Fri Mar  2 15:03:00 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Fri, 2 Mar 2001 07:03:00 -0800
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <20010302095617.A11182@ute.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Fri, Mar 02, 2001 at 09:56:17AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302095617.A11182@ute.cnri.reston.va.us>
Message-ID: <20010302070300.B11722@glacier.fnational.com>

On Fri, Mar 02, 2001 at 09:56:17AM -0500, Andrew Kuchling wrote:
> It already does this for regular distributions (see build_scripts.py),
> but running with a newly built Python causes problems; it uses
> sys.executable, which results in '#!python' at build time.  I'm not
> sure how to fix this; perhaps the Makefile should always set a
> BUILDING_PYTHON environment variable, and the Distutils could check
> for its being set.  

setup.py could fix this by assigning sys.executable to $(prefix)/bin/python
before installing.  I don't know if that would break anything
else though.

  Neil


From DavidA@ActiveState.com  Fri Mar  2 01:05:59 2001
From: DavidA@ActiveState.com (David Ascher)
Date: Thu, 1 Mar 2001 17:05:59 -0800
Subject: [Python-Dev] Finally, a Python Cookbook!
Message-ID: <PLEJJNOHDIGGLDPOGPJJOEOKCNAA.DavidA@ActiveState.com>

Hello all --

ActiveState is now hosting a site
(http://www.ActiveState.com/PythonCookbook) that will be the beginning
of a series of community-based language-specific cookbooks to be
jointly sponsored by ActiveState and O'Reilly.

The first in the series is the "Python Cookbook".  We will be
announcing this effort at the Python Conference, but wanted to give
you a sneak peek at it ahead of time.

The idea behind it is for it to be a managed open collaborative
repository of Python recipes that implements RCD (rapid content
development) for a cookbook that O'Reilly will eventually publish.
The Python Cookbook will be freely available for review and use by
all.  It will also be different than any other project of its kind in
one very important way.  This will be a community effort: a book
written by the Python community and delivered to the Python community,
as a handy reference and invaluable aid for those still to join.  The
partnership of ActiveState and O'Reilly provides the framework, the
organization, and the resources necessary to help bring this book to
life.

If you've got the time, please dig in your code base for recipes which
you may have developed and consider contributing them.  That way,
you'll help us 'seed' the cookbook for its launch at the 9th Python
Conference on March 5th!

Whether you have the time to contribute or not, we'd appreciate it if
you registered, browsed the site and gave us feedback at
pythoncookbook@ActiveState.com.

We want to make sure that this site reflects the community's needs, so
all feedback is welcome.

Thanks in advance for all your efforts in making this a successful
endeavor.

Thanks,

David Ascher & the Cookbook team
ActiveState - Perl Python Tcl XSLT - Programming for the People

Vote for Your Favorite Perl & Python Programming
Accomplishments in the first Active Awards!
>>http://www.ActiveState.com/Awards  <http://www.activestate.com/awards><<



From gward@cnri.reston.va.us  Fri Mar  2 16:10:53 2001
From: gward@cnri.reston.va.us (Greg Ward)
Date: Fri, 2 Mar 2001 11:10:53 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <200103021434.JAA06630@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 02, 2001 at 09:34:37AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com>
Message-ID: <20010302111052.A14221@thrak.cnri.reston.va.us>

On 02 March 2001, Guido van Rossum said:
> This could become a standard distutils feature!

It is -- if a script is listed in 'scripts' in setup.py, and it's a Python
script, its #! line is automatically munged to point to the python that's
running the setup script.

Hmmm, this could be a problem if that python hasn't been installed itself
yet.  IIRC, it just trusts sys.executable.

        Greg


From tim.one@home.com  Fri Mar  2 16:27:43 2001
From: tim.one@home.com (Tim Peters)
Date: Fri, 2 Mar 2001 11:27:43 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGKJDAA.tim.one@home.com>

[Guido]
> I was tickled when I found a quote from Tim Berners-Lee about Python
> here: http://www.w3.org/2000/10/swap/#L88
>
> Most quotable part: "Python is a language you can get into on one
> battery!"

Most baffling part:  "One day, 15 minutes before I had to leave for the
airport, I got my laptop back out of my bag, and sucked off the web the
python 1.6 system ...".  What about python.org steered people toward 1.6?  Of
course, Tim *is* a Tim, and they're not always rational ...




From guido@digicool.com  Fri Mar  2 16:28:59 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 11:28:59 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: Your message of "Fri, 02 Mar 2001 11:27:43 EST."
 <LNBBLJKPBEHFEDALKOLCGEGKJDAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCGEGKJDAA.tim.one@home.com>
Message-ID: <200103021628.LAA07147@cj20424-a.reston1.va.home.com>

> [Guido]
> > I was tickled when I found a quote from Tim Berners-Lee about Python
> > here: http://www.w3.org/2000/10/swap/#L88
> >
> > Most quotable part: "Python is a language you can get into on one
> > battery!"
> 
> Most baffling part:  "One day, 15 minutes before I had to leave for the
> airport, I got my laptop back out of my bag, and sucked off the web the
> python 1.6 system ...".  What about python.org steered people toward 1.6?  Of
> course, Tim *is* a Tim, and they're not always rational ...

My guess is this was before 2.0 final was released.  I don't blame
him.  And after all, he's a Tim -- he can do what he wants to! :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)


From thomas.heller@ion-tof.com  Fri Mar  2 16:38:04 2001
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Fri, 2 Mar 2001 17:38:04 +0100
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us>
Message-ID: <02b501c0a337$282a8d60$e000a8c0@thomasnotebook>

Greg Ward, who suddenly reappears:
> On 02 March 2001, Guido van Rossum said:
> > This could become a standard distutils feature!
> 
> It is -- if a script is listed in 'scripts' in setup.py, and it's a Python
> script, its #! line is automatically munged to point to the python that's
> running the setup script.
> 
What about this code in build_scripts.py?

  # check if Python is called on the first line with this expression.
  # This expression will leave lines using /usr/bin/env alone; presumably
  # the script author knew what they were doing...)
  first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')

Doesn't this mean that
#!/usr/bin/env python
lines are NOT fixed?
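
A quick check of the quoted pattern (runnable as-is) bears that
reading out:

```python
import re

# the pattern quoted from build_scripts.py above
first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')

assert first_line_re.match('#!/usr/local/bin/python')      # would be munged
assert not first_line_re.match('#!/usr/bin/env python')    # left alone
```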

Thomas



From gward@python.net  Fri Mar  2 16:41:24 2001
From: gward@python.net (Greg Ward)
Date: Fri, 2 Mar 2001 11:41:24 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <20010302070300.B11722@glacier.fnational.com>; from nas@arctrix.com on Fri, Mar 02, 2001 at 07:03:00AM -0800
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302095617.A11182@ute.cnri.reston.va.us> <20010302070300.B11722@glacier.fnational.com>
Message-ID: <20010302114124.A2826@cthulhu.gerg.ca>

On 02 March 2001, Neil Schemenauer said:
> setup.py fix this by assigning sys.executable to $(prefix)/bin/python
> before installing.  I don't know if that would break anything
> else though.

That *should* work.  Don't think Distutils relies on
"os.path.exists(sys.executable)" anywhere....

...oops, may have spoken too soon: the byte-compilation code (in
distutils/util.py) spawns sys.executable.  So if byte-compilation is
done in the same run as installing scripts, you lose.  Fooey.

        Greg
-- 
Greg Ward - just another /P(erl|ython)/ hacker          gward@python.net
http://starship.python.net/~gward/
Heisenberg may have slept here.


From gward@python.net  Fri Mar  2 16:47:39 2001
From: gward@python.net (Greg Ward)
Date: Fri, 2 Mar 2001 11:47:39 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <02b501c0a337$282a8d60$e000a8c0@thomasnotebook>; from thomas.heller@ion-tof.com on Fri, Mar 02, 2001 at 05:38:04PM +0100
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us> <02b501c0a337$282a8d60$e000a8c0@thomasnotebook>
Message-ID: <20010302114739.B2826@cthulhu.gerg.ca>

On 02 March 2001, Thomas Heller said:
> Greg Ward, who suddenly reappears:

"He's not dead, he's just resting!"

> What about this code in build_scripts.py?
> 
>   # check if Python is called on the first line with this expression.
>   # This expression will leave lines using /usr/bin/env alone; presumably
>   # the script author knew what they were doing...)
>   first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')

Hmm, that's a recent change:

  revision 1.7
  date: 2001/02/28 20:59:33;  author: akuchling;  state: Exp;  lines: +5 -3
  Leave #! lines featuring /usr/bin/env alone

> Doesn't this mean that
> #!/usr/bin/env python
> lines are NOT fixed?

Yup.  Andrew, care to explain why not munging "#!/usr/bin/env python"
lines is the right thing to do?  I happen to think it's not; I think #!
lines should always be munged (assuming this is a Python script, of
course).

        Greg
-- 
Greg Ward - nerd                                        gward@python.net
http://starship.python.net/~gward/
Disclaimer: All rights reserved. Void where prohibited. Limit 1 per customer.


From akuchlin@mems-exchange.org  Fri Mar  2 16:54:59 2001
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Fri, 2 Mar 2001 11:54:59 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <20010302114739.B2826@cthulhu.gerg.ca>; from gward@python.net on Fri, Mar 02, 2001 at 11:47:39AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us> <02b501c0a337$282a8d60$e000a8c0@thomasnotebook> <20010302114739.B2826@cthulhu.gerg.ca>
Message-ID: <20010302115459.A3029@ute.cnri.reston.va.us>

On Fri, Mar 02, 2001 at 11:47:39AM -0500, Greg Ward wrote:
>>   # check if Python is called on the first line with this expression.
>>   # This expression will leave lines using /usr/bin/env alone; presumably
>>   # the script author knew what they were doing...)
>>   first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')
>
>Yup.  Andrew, care to explain why not munging "#!/usr/bin/env python"
>lines is the right thing to do?  I happen to think it's not; I think #!
>lines should always be munged (assuming this is a Python script, of
>course).

Disagree; as the comment says, "presumably the script author knew what
they were doing..." when they put /usr/bin/env at the top.  This had
to be done so that pydoc could be installed at all.

--amk


From guido@digicool.com  Fri Mar  2 17:01:50 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 12:01:50 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: Your message of "Fri, 02 Mar 2001 11:54:59 EST."
 <20010302115459.A3029@ute.cnri.reston.va.us>
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us> <02b501c0a337$282a8d60$e000a8c0@thomasnotebook> <20010302114739.B2826@cthulhu.gerg.ca>
 <20010302115459.A3029@ute.cnri.reston.va.us>
Message-ID: <200103021701.MAA07349@cj20424-a.reston1.va.home.com>

> >>   # check if Python is called on the first line with this expression.
> >>   # This expression will leave lines using /usr/bin/env alone; presumably
> >>   # the script author knew what they were doing...)
> >>   first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')
> >
> >Yup.  Andrew, care to explain why not munging "#!/usr/bin/env python"
> >lines is the right thing to do?  I happen to think it's not; I think #!
> >lines should always be munged (assuming this is a Python script, of
> >course).
> 
> Disagree; as the comment says, "presumably the script author knew what
> they were doing..." when they put /usr/bin/env at the top.  This had
> to be done so that pydoc could be installed at all.

Don't understand the last sentence -- what started this thread is that
when pydoc is installed but there's another (older) installed python
that is first on $PATH, pydoc breaks.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From thomas@xs4all.net  Fri Mar  2 20:34:31 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 2 Mar 2001 21:34:31 +0100
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 02, 2001 at 09:22:27AM -0500
References: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>
Message-ID: <20010302213431.Q9678@xs4all.nl>

On Fri, Mar 02, 2001 at 09:22:27AM -0500, Guido van Rossum wrote:

> I was tickled when I found a quote from Tim Berners-Lee about Python
> here: http://www.w3.org/2000/10/swap/#L88

> Most quotable part: "Python is a language you can get into on one
> battery!"

Actually, I think this bit is more important:

"I remember Guido trying to persuade me to use python as I was trying to
persuade him to write web software!"

So when can we expect the new Python web interface ? :-)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@acm.org  Fri Mar  2 20:32:27 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Fri, 2 Mar 2001 15:32:27 -0500 (EST)
Subject: [Python-Dev] doc tree frozen for 2.1b1
Message-ID: <15008.859.4988.155789@localhost.localdomain>

  The documentation is frozen until the 2.1b1 announcement goes out.
I have a couple of checkins to make, but the formatted HTML for the
Windows installer has already been cut & shipped.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From guido@digicool.com  Fri Mar  2 20:41:34 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 15:41:34 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: Your message of "Fri, 02 Mar 2001 21:34:31 +0100."
 <20010302213431.Q9678@xs4all.nl>
References: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>
 <20010302213431.Q9678@xs4all.nl>
Message-ID: <200103022041.PAA12359@cj20424-a.reston1.va.home.com>

> Actually, I think this bit is more important:
> 
> "I remember Guido trying to persuade me to use python as I was trying to
> persuade him to write web software!"
> 
> So when can we expect the new Python web interface ? :-)

There's actually a bit of a sad story.  I really liked the early web,
and wrote one of the earliest graphical web browsers (before Mozilla;
I was using Python and stdwin).  But I didn't get the importance of
dynamic content, and initially scoffed at the original cgi.py,
concocted by Michael McLay (always a good nose for trends!) and Steven
Majewski (ditto).

--Guido van Rossum (home page: http://www.python.org/~guido/)


From fdrake@acm.org  Fri Mar  2 20:49:09 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Fri, 2 Mar 2001 15:49:09 -0500 (EST)
Subject: [Python-Dev] Python 2.1 beta 1 documentation online
Message-ID: <15008.1861.84677.687041@localhost.localdomain>

  The documentation for Python 2.1 beta 1 is now online:

	http://python.sourceforge.net/devel-docs/

  This is the same as the documentation that will ship with the
Windows installer.
  This is the online location of the development version of the
documentation.  As I make updates to the documentation, this will be
updated periodically; the "front page" will indicate the date of the
most recent update.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From guido@digicool.com  Fri Mar  2 22:46:09 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 17:46:09 -0500
Subject: [Python-Dev] Python 2.1b1 released
Message-ID: <200103022246.RAA18529@cj20424-a.reston1.va.home.com>

With great pleasure I announce the release of Python 2.1b1.  This is a
big step towards the release of Python 2.1; the final release is
expected to take place in mid April.

Find out all about 2.1b1, including docs and downloads (Windows
installer and source tarball), at the 2.1 release page:

    http://www.python.org/2.1/


WHAT'S NEW?
-----------

For the big picture, see Andrew Kuchling's What's New in Python 2.1:

    http://www.amk.ca/python/2.1/

For more detailed release notes, see SourceForge:

    http://sourceforge.net/project/shownotes.php?release_id=25924

The big news since 2.1a2 was released a month ago:

- Nested Scopes (PEP 227)[*] are now optional.  They must be enabled
  by including the statement "from __future__ import nested_scopes" at
  the beginning of a module (PEP 236).  Nested scopes will be a
  standard feature in Python 2.2.

- Compile-time warnings are now generated for a number of conditions
  that will break or change in meaning when nested scopes are enabled.

- The new tool *pydoc* displays module documentation, extracted from
  doc strings.  It works in a text environment as well as in a GUI
  environment (where it cooperates with a web browser).  On Windows,
  this is in the Start menu as "Module Docs".

- Case-sensitive import.  On systems with case-insensitive but
  case-preserving file systems, such as Windows (including Cygwin) and
  MacOS, import now continues to search the next directory on sys.path
  when a case mismatch is detected.  See PEP 235 for the full scoop.

- New platforms.  Python 2.1 now fully supports MacOS X, Cygwin, and
  RISCOS.

[*] For PEPs (Python Enhancement Proposals), see the PEP index:

    http://python.sourceforge.net/peps/
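The nested-scopes opt-in described above looks like this inside a module (a tiny sketch; the function names are just for the example):

```python
from __future__ import nested_scopes  # per-module opt-in in 2.1

def make_adder(n):
    def add(x):
        # Without nested scopes, 'n' is neither local to add() nor
        # global, so this line would raise a NameError.
        return x + n
    return add

add5 = make_adder(5)   # add5(3) == 8
```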

I hope to see you all next week at the Python9 conference in Long
Beach, CA:

    http://www.python9.org

--Guido van Rossum (home page: http://www.python.org/~guido/)


From aahz@panix.com  Sat Mar  3 18:21:44 2001
From: aahz@panix.com (aahz@panix.com)
Date: Sat, 3 Mar 2001 13:21:44 -0500 (EST)
Subject: [Python-Dev] Bug fix releases (was Re: Nested scopes resolution -- you can breathe again!)
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org>
Message-ID: <200103031821.NAA24060@panix3.panix.com>

[posted to c.l.py with cc to python-dev]

[I apologize for the delay in posting this, but it's taken me some time
to get my thoughts straight.  I hope that by posting this right before
IPC9 there'll be a chance to get some good discussion in person.]

In article <mailman.982897324.9109.python-list@python.org>,
Guido van Rossum  <guido@digicool.com> wrote:
>
>We have clearly underestimated how much code the nested scopes would
>break, but more importantly we have underestimated how much value our
>community places on stability.  

I think so, yes, on that latter clause.  I think perhaps it wasn't clear
at the time, but I believe that much of the yelling over "print >>" was
less about the specific design than because it came so close to the
release of 2.0 that there wasn't *time* to sit down and talk things
over rationally.

As I see it, there's a natural tension between adding features
and delivering bug fixes.  Particularly because of Microsoft, I think
that upgrading to a feature release to get bug fixes has become anathema
to a lot of people, and I think that seeing features added or changed
close to a release reminds people too much of the Microsoft upgrade
treadmill.

>So here's the deal: we'll make nested scopes an optional feature in
>2.1, default off, selectable on a per-module basis using a mechanism
>that's slightly hackish but is guaranteed to be safe.  (See below.)
>
>At the same time, we'll augment the compiler to detect all situations
>that will break when nested scopes are introduced in the future, and
>issue warnings for those situations.  The idea here is that warnings
>don't break code, but encourage folks to fix their code so we can
>introduce nested scopes in 2.2.  Given our current pace of releases
>that should be about 6 months warning.

As some other people have pointed out, six months is actually a rather
short cycle when it comes to delivering enterprise applications across
hundreds or thousands of machines.  Notice how many people have said
they haven't upgraded from 1.5.2 yet!  Contrast that with the quickness
of the 1.5.1 to 1.5.2 upgrade.

I believe that "from __future__" is a good idea, but it is at best a
bandage over the feature/bug fix tension.  I think that the real issue
is that in the world of core Python development, release N is always a
future release, never the current release; as soon as release N goes out
the door into production, it immediately becomes release N-1 and forever
dead to development.

Rather than change that mindset directly, I propose that we move to a
forked model of development.  During the development cycle for any given
release, release (N-1).1 is also a live target -- but strictly for bug
fixes.  I suggest that shortly after the release for Na1, there should
also be a release for (N-1).1b1; shortly after the release of Nb1, there
would be (N-1).1b2.  And (N-1).1 would be released shortly after N.

This means that each feature-based release gets one-and-only-one pure
bugfix release.  I think this will do much to promote the idea of Python
as a stable platform for application development.

There are a number of ways I can see this working, including setting up
a separate project at SourceForge (e.g. pythonpatch.sourceforge.net).
But I don't think this will work at all unless the PythonLabs team is at
least willing to "bless" the bugfix release.  Uncle Timmy has been known
to make snarky comments about forever maintaining 1.5.2; I think this is
a usable compromise that will take relatively little effort to keep
going once it's set up.

I think one key advantage of this approach is that a lot more people
will be willing to try out a beta of a strict bugfix release, so the
release N bugfixes will get more testing than they otherwise would.

If there's interest in this idea, I'll write it up as a formal PEP.

It's too late for my proposed model to work during the 2.1 release
cycle, but I think it would be an awfully nice gesture to the community
to take a month off after 2.1 to create 2.0.1, before going on to 2.2.



BTW, you should probably blame Fredrik for this idea.  ;-)  If he had
skipped providing 1.5.2 and 2.0 versions of sre, I probably wouldn't
have considered this a workable idea.  I was just thinking that it was
too bad there wasn't a packaged version of 2.0 containing the new sre,
and that snowballed into this.
-- 
                      --- Aahz (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het    <*>     http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Nostalgia just ain't what it used to be


From guido@digicool.com  Sat Mar  3 19:10:35 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 14:10:35 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 13:21:44 EST."
 <200103031821.NAA24060@panix3.panix.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org>
 <200103031821.NAA24060@panix3.panix.com>
Message-ID: <200103031910.OAA21663@cj20424-a.reston1.va.home.com>

Aahz writes:
> [posted to c.l.py with cc to python-dev]
> 
> [I apologize for the delay in posting this, but it's taken me some time
> to get my thoughts straight.  I hope that by posting this right before
> IPC9 there'll be a chance to get some good discussion in person.]

Excellent.  Even in time for me to mention this in my keynote! :-)

> In article <mailman.982897324.9109.python-list@python.org>,
> Guido van Rossum  <guido@digicool.com> wrote:
> >
> >We have clearly underestimated how much code the nested scopes would
> >break, but more importantly we have underestimated how much value our
> >community places on stability.  
> 
> I think so, yes, on that latter clause.  I think perhaps it wasn't clear
> at the time, but I believe that much of the yelling over "print >>" was
> less about the specific design than because it came so close to the
> release of 2.0 that there wasn't *time* to sit down and talk things
> over rationally.

In my eyes the issues are somewhat different: "print >>" couldn't
possibly break existing code; nested scopes clearly do, and that's why
we decided to use the __future__ statement.

But I understand that you're saying that the community has grown so
conservative that it can't stand new features even if they *are* fully
backwards compatible.

I wonder, does that extend to new library modules?  Is there also
resistance against the growth there?  I don't think so -- if anything,
people are clamoring for more stuff to become standard (while at the
same time I feel some pressure to cut dead wood, like the old SGI
multimedia modules).

So that relegates us at PythonLabs to a number of things: coding new
modules (boring), or trying to improve performance of the virtual
machine (equally boring, and difficult to boot), or fixing bugs (did I
mention boring? :-).

So what can we do for fun?  (Besides redesigning Zope, which is lots
of fun, but runs into the same issues.)

> As I see it, there's a natural tension between adding features
> and delivering bug fixes.  Particularly because of Microsoft, I think
> that upgrading to a feature release to get bug fixes has become anathema
> to a lot of people, and I think that seeing features added or changed
> close to a release reminds people too much of the Microsoft upgrade
> treadmill.

Actually, I thought that the Microsoft way these days was to smuggle
entire new subsystems into bugfix releases.  What else are "Service
Packs" for? :-)

> >So here's the deal: we'll make nested scopes an optional feature in
> >2.1, default off, selectable on a per-module basis using a mechanism
> >that's slightly hackish but is guaranteed to be safe.  (See below.)
> >
> >At the same time, we'll augment the compiler to detect all situations
> >that will break when nested scopes are introduced in the future, and
> >issue warnings for those situations.  The idea here is that warnings
> >don't break code, but encourage folks to fix their code so we can
> >introduce nested scopes in 2.2.  Given our current pace of releases
> >that should be about 6 months warning.
> 
> As some other people have pointed out, six months is actually a rather
> short cycle when it comes to delivering enterprise applications across
> hundreds or thousands of machines.  Notice how many people have said
> they haven't upgraded from 1.5.2 yet!  Contrast that with the quickness
> of the 1.5.1 to 1.5.2 upgrade.

Clearly, we're taking this into account.  If we believed you all
upgraded the day we announced a new release, we'd be even more
conservative with adding new features (at least features introducing
incompatibilities).

> I believe that "from __future__" is a good idea, but it is at best a
> bandage over the feature/bug fix tension.  I think that the real issue
> is that in the world of core Python development, release N is always a
> future release, never the current release; as soon as release N goes out
> the door into production, it immediately becomes release N-1 and forever
> dead to development
> 
> Rather than change that mindset directly, I propose that we move to a
> forked model of development.  During the development cycle for any given
> release, release (N-1).1 is also a live target -- but strictly for bug
> fixes.  I suggest that shortly after the release for Na1, there should
> also be a release for (N-1).1b1; shortly after the release of Nb1, there
> would be (N-1).1b2.  And (N-1).1 would be released shortly after N.

Your math at first confused the hell out of me, but I see what you
mean.  You want us to spend time on 2.0.1 which should be a bugfix
release for 2.0, while at the same time working on 2.1 which is a new
feature release.

Guess what -- I am secretly (together with the PSU) planning a 2.0.1
release.  I'm waiting however for obtaining the ownership rights to
the 2.0 release, so we can fix the GPL incompatibility issue in the
license at the same time.  (See the 1.6.1 release.)  I promise that
2.0.1, unlike 1.6.1, will contain more than a token set of real
bugfixes.  Hey, we already have a branch in the CVS tree for 2.0.1
development!  (Tagged "release20-maint".)

We could use some checkins on that branch though.

> This means that each feature-based release gets one-and-only-one pure
> bugfix release.  I think this will do much to promote the idea of Python
> as a stable platform for application development.

Anything we can do to please those republicans! :-)

> There are a number of ways I can see this working, including setting up
> a separate project at SourceForge (e.g. pythonpatch.sourceforge.net).
> But I don't think this will work at all unless the PythonLabs team is at
> least willing to "bless" the bugfix release.  Uncle Timmy has been known
> to make snarky comments about forever maintaining 1.5.2; I think this is
> a usable compromise that will take relatively little effort to keep
> going once it's set up.

With the CVS branch it's *trivial* to keep it going.  We should have
learned from the Tcl folks, they've had 8.NpM releases for a while.

> I think one key advantage of this approach is that a lot more people
> will be willing to try out a beta of a strict bugfix release, so the
> release N bugfixes will get more testing than they otherwise would.

Wait a minute!  Now you're making it too complicated.  Betas of bugfix
releases?  That seems to defeat the purpose.  What kind of
beta-testing does a pure bugfix release need?  Presumably each
individual bugfix applied has already been tested before it is checked
in!  Or are you thinking of adding small new features to a "bugfix"
release?  That ought to be a no-no according to your own philosophy!

> If there's interest in this idea, I'll write it up as a formal PEP.

Please do.

> It's too late for my proposed model to work during the 2.1 release
> cycle, but I think it would be an awfully nice gesture to the community
> to take a month off after 2.1 to create 2.0.1, before going on to 2.2.

It's not too late, as I mentioned.  We'll also do this for 2.1.

> BTW, you should probably blame Fredrik for this idea.  ;-)  If he had
> skipped providing 1.5.2 and 2.0 versions of sre, I probably wouldn't
> have considered this a workable idea.  I was just thinking that it was
> too bad there wasn't a packaged version of 2.0 containing the new sre,
> and that snowballed into this.

So the new (2.1) sre code should be merged back into 2.0.1, right?
Fredrik, go ahead!  We'll start planning for the 2.0.1 release right
after we're back from the conference.

BTW, See you at the conference!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From fdrake@acm.org  Sat Mar  3 19:30:13 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Sat, 3 Mar 2001 14:30:13 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103031910.OAA21663@cj20424-a.reston1.va.home.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
 <mailman.982897324.9109.python-list@python.org>
 <200103031821.NAA24060@panix3.panix.com>
 <200103031910.OAA21663@cj20424-a.reston1.va.home.com>
Message-ID: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > I wonder, does that extend to new library modules?  Is there also
 > resistance against the growth there?  I don't think so -- if anything,
 > people are clamoring for more stuff to become standard (while at the

  There is still the issue of name clashes; introducing a new module
in the top-level namespace introduces a potential conflict with
someone's application-specific modules.  This is a good reason for us
to get the standard library packagized sooner rather than later
(although this would have to be part of a "feature" release;).

 > Wait a minute!  Now you're making it too complicated.  Betas of bugfix
 > releases?  That seems to defeat the purpose.  What kind of

  Betas of the bugfix releases are important -- portability testing is
fairly difficult to do when all we have are Windows and Linux/x86
boxes.  There's definitely a need for at least one beta.  We probably
don't need the lengthy, multi-phase alpha/alpha/beta/beta/candidate
cycle we're using for feature releases now.

 > It's not too late, as I mentioned.  We'll also do this for 2.1.

  Managing the bugfix releases would also be an excellent task for
someone who's expecting to use the bugfix releases more than the
feature releases -- the mentality has to be right for the task.  I
know I'm much more of a "features" person, and would have a hard time
not crossing the line if it were up to me what went into a bugfix
release.

 > BTW, See you at the conference!

  If we don't get snowed in!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From guido@digicool.com  Sat Mar  3 19:44:19 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 14:44:19 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 14:30:13 EST."
 <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <200103031910.OAA21663@cj20424-a.reston1.va.home.com>
 <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
Message-ID: <200103031944.OAA21835@cj20424-a.reston1.va.home.com>

> Guido van Rossum writes:
>  > I wonder, does that extend to new library modules?  Is there also
>  > resistance against the growth there?  I don't think so -- if anything,
>  > people are clamoring for more stuff to become standard (while at the
> 
>   There is still the issue of name clashes; introducing a new module
> in the top-level namespace introduces a potential conflict with
> someone's application-specific modules.  This is a good reason for us
> to get the standard library packagized sooner rather than later
> (although this would have to be part of a "feature" release;).

But of course the library repackaging in itself would cause enormous
outcries, because in a very real sense it *does* break code.

>  > Wait a minute!  Now you're making it too complicated.  Betas of bugfix
>  > releases?  That seems to defeat the purpose.  What kind of
> 
>   Betas of the bugfix releases are important -- portability testing is
> fairly difficult to do when all we have are Windows and Linux/x86
> boxes.  There's definitely a need for at least one beta.  We probably
> don't need the lengthy, multi-phase alpha/alpha/beta/beta/candidate
> cycle we're using for feature releases now.

OK, you can have *one* beta.  That's it.

>  > It's not too late, as I mentioned.  We'll also do this for 2.1.
> 
>   Managing the bugfix releases would also be an excellent task for
> someone who's expecting to use the bugfix releases more than the
> feature releases -- the mentality has to be right for the task.  I
> know I'm much more of a "features" person, and would have a hard time
> not crossing the line if it were up to me what went into a bugfix
> release.

That's how all of us here at PythonLabs are feeling...  I feel a
community task coming.  I'll bless a 2.0.1 release and the general
idea of bugfix releases, but doing the grunt work won't be a
PythonLabs task.  Someone else inside or outside Python-dev will have
to do some work.  Aahz?

>  > BTW, See you at the conference!
> 
>   If we don't get snowed in!

Good point.  East coasters flying to LA on Monday, watch your weather
forecast!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From fdrake@cj42289-a.reston1.va.home.com  Sat Mar  3 19:47:49 2001
From: fdrake@cj42289-a.reston1.va.home.com (Fred Drake)
Date: Sat,  3 Mar 2001 14:47:49 -0500 (EST)
Subject: [Python-Dev] [development doc updates]
Message-ID: <20010303194749.629AC28803@cj42289-a.reston1.va.home.com>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/


Additional information on using non-Microsoft compilers on Windows when
using the Distutils, contributed by Rene Liebscher.



From tim.one@home.com  Sat Mar  3 19:55:09 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 3 Mar 2001 14:55:09 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>

[Fred L. Drake, Jr.]
> ...
>   Managing the bugfix releases would also be an excellent task for
> someone who's expecting to use the bugfix releases more than the
> feature releases -- the mentality has to be right for the task.  I
> know I'm much more of a "features" person, and would have a hard time
> not crossing the line if it were up to me what went into a bugfix
> release.

Note there was never a bugfix release for 1.5.2, despite that 1.5.2 had some
serious bugs, and that 1.5.2 was current for an unprecedentedly long time.
Guido put out a call for volunteers to produce a 1.5.2 bugfix release, but
nobody responded.  Past is prelude ...

everyone-is-generous-with-everyone-else's-time-ly y'rs  - tim



From fdrake@acm.org  Sat Mar  3 19:53:45 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Sat, 3 Mar 2001 14:53:45 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
References: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
 <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
Message-ID: <15009.19401.787058.744462@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Note there was never a bugfix release for 1.5.2, despite that 1.5.2 had some
 > serious bugs, and that 1.5.2 was current for an unprecedentedly long time.
 > Guido put out a call for volunteers to produce a 1.5.2 bugfix release, but
 > nobody responded.  Past is prelude ...

  And as long as that continues, I'd have to conclude that the user
base is largely happy with the way we've done things.  *If* users want
bugfix releases badly enough, someone will do them.  If not, hey,
features can be useful!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From fdrake@acm.org  Sat Mar  3 19:54:31 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Sat, 3 Mar 2001 14:54:31 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103031944.OAA21835@cj20424-a.reston1.va.home.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
 <mailman.982897324.9109.python-list@python.org>
 <200103031821.NAA24060@panix3.panix.com>
 <200103031910.OAA21663@cj20424-a.reston1.va.home.com>
 <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
 <200103031944.OAA21835@cj20424-a.reston1.va.home.com>
Message-ID: <15009.19447.154958.449303@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > But of course the library repackaging in itself would cause enormous
 > outcries, because in a very real sense it *does* break code.

  That's why it has to be a feature release.  ;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From guido@digicool.com  Sat Mar  3 20:07:09 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 15:07:09 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 14:55:09 EST."
 <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
Message-ID: <200103032007.PAA21925@cj20424-a.reston1.va.home.com>

> [Fred L. Drake, Jr.]
> > ...
> >   Managing the bugfix releases would also be an excellent task for
> > someone who's expecting to use the bugfix releases more than the
> > feature releases -- the mentality has to be right for the task.  I
> > know I'm much more of a "features" person, and would have a hard time
> > not crossing the line if it were up to me what went into a bugfix
> > release.

[Uncle Timmy]
> Note there was never a bugfix release for 1.5.2, despite that 1.5.2 had some
> serious bugs, and that 1.5.2 was current for an unprecedentedly long time.
> Guido put out a call for volunteers to produce a 1.5.2 bugfix release, but
> nobody responded.  Past is prelude ...
> 
> everyone-is-generous-with-everyone-else's-time-ly y'rs  - tim

I understand the warning.  How about the following (and then I really
have to go write my keynote speech :-).  PythonLabs will make sure
that it will happen.  But how much stuff goes into the bugfix release
is up to the community.

We'll give SourceForge commit privileges to individuals who want to do
serious work on the bugfix branch -- but before you get commit
privileges, you must first show that you know what you are doing by
submitting useful patches through the SourceForge patch manager.

Since a lot of the 2.0.1 effort will be deciding which code from 2.1
to merge back into 2.0.1, it may not make sense to upload context
diffs to SourceForge.  Instead, we'll accept reasoned instructions for
specific patches to be merged back.  Instructions like "cvs update
-j<rev1> -j<rev2> <file>" are very helpful; please also explain why!
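For anyone unfamiliar with the -j form: such an instruction asks the branch maintainer to merge the change made between two trunk revisions onto the maintenance branch.  A hypothetical session (the revision numbers and file name are placeholders, not real merge candidates):

```shell
# Check out the 2.0.1 maintenance branch (the tag mentioned above).
cvs checkout -r release20-maint python/dist/src
cd python/dist/src

# Merge the trunk change between the two revisions into the working
# copy, then commit it on the branch.
cvs update -j 1.14 -j 1.15 Lib/sre_compile.py
cvs commit -m "Backport fix from trunk revs 1.14->1.15" Lib/sre_compile.py
```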

--Guido van Rossum (home page: http://www.python.org/~guido/)


From aahz@panix.com  Sat Mar  3 21:55:28 2001
From: aahz@panix.com (aahz@panix.com)
Date: Sat, 3 Mar 2001 16:55:28 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <mailman.983646726.27322.python-list@python.org>
Message-ID: <200103032155.QAA05049@panix3.panix.com>

In article <mailman.983646726.27322.python-list@python.org>,
Guido van Rossum  <guido@digicool.com> wrote:
>Aahz writes:
>>
>> I think so, yes, on that latter clause.  I think perhaps it wasn't clear
>> at the time, but I believe that much of the yelling over "print >>" was
>> less about the specific design than because it came so close to the
>> release of 2.0 that there wasn't *time* to sit down and talk things
>> over rationally.
>
>In my eyes the issues are somewhat different: "print >>" couldn't
>possibly break existing code; nested scopes clearly do, and that's why
>we decided to use the __future__ statement.
>
>But I understand that you're saying that the community has grown so
>conservative that it can't stand new features even if they *are* fully
>backwards compatible.

Then you understand incorrectly.  There's a reason why I emphasized
"*time*" up above.  It takes time to grok a new feature, time to think
about whether and how we should argue in favor or against it, time to
write comprehensible and useful arguments.  In hindsight, I think you
probably did make the right design decision on "print >>", no matter how
ugly I think it looks.  But I still think you made absolutely the wrong
decision to include it in 2.0.

>So that relegates us at PythonLabs to a number of things: coding new
>modules (boring), or trying to improve performance of the virtual
>machine (equally boring, and difficult to boot), or fixing bugs (did I
>mention boring? :-).
>
>So what can we do for fun?  (Besides redesigning Zope, which is lots
>of fun, but runs into the same issues.)

Write new versions of Python.  You've come up with a specific protocol
in a later post that I think I approve of; I was trying to suggest a
balance between lots of grunt work maintenance and what I see as
perpetual language instability in the absence of any bug fix releases.

>Your math at first confused the hell out of me, but I see what you
>mean.  You want us to spend time on 2.0.1 which should be a bugfix
>release for 2.0, while at the same time working on 2.1 which is a new
>feature release.

Yup.  The idea is that because it's always an N and N-1 pair, the base
code is the same for both and applying patches to both should be
(relatively speaking) a small amount of extra work.  Most of the work
lies in deciding *which* patches should go into N-1.

>Guess what -- I am secretly (together with the PSU) planning a 2.0.1
>release.  I'm waiting however for obtaining the ownership rights to
>the 2.0 release, so we can fix the GPL incompatibility issue in the
>license at the same time.  (See the 1.6.1 release.)  I promise that
>2.0.1, unlike 1.6.1, will contain more than a token set of real
>bugfixes.  Hey, we already have a branch in the CVS tree for 2.0.1
>development!  (Tagged "release20-maint".)

Yay!  (Sorry, I'm not much of a CVS person; the one time I tried using
it, I couldn't even figure out where to download the software.  Call me
stupid.)

>We could use some checkins on that branch though.

Fair enough.

>> This means that each feature-based release gets one-and-only-one pure
>> bugfix release.  I think this will do much to promote the idea of Python
>> as a stable platform for application development.
>
>Anything we can do to please those republicans! :-)

<grin>

>> There are a number of ways I can see this working, including setting up
>> a separate project at SourceForge (e.g. pythonpatch.sourceforge.net).
>> But I don't think this will work at all unless the PythonLabs team is at
>> least willing to "bless" the bugfix release.  Uncle Timmy has been known
>> to make snarky comments about forever maintaining 1.5.2; I think this is
>> a usable compromise that will take relatively little effort to keep
>> going once it's set up.
>
>With the CVS branch it's *trivial* to keep it going.  We should have
>learned from the Tcl folks, they've had 8.NpM releases for a while.

I'm suggesting having one official PythonLabs-created bug fix release as
being a small incremental effort over the work in the feature release.
But if you want it to be an entirely community-driven effort, I can't
argue with that.

My one central point is that I think this will fail if PythonLabs
doesn't agree to formally certify each release.

>> I think one key advantage of this approach is that a lot more people
>> will be willing to try out a beta of a strict bugfix release, so the
>> release N bugfixes will get more testing than they otherwise would.
>
>Wait a minute!  Now you're making it too complicated.  Betas of bugfix
>releases?  That seems to defeat the purpose.  What kind of
>beta-testing does a pure bugfix release need?  Presumably each
>individual bugfix applied has already been tested before it is checked
>in!  

"The difference between theory and practice is that in theory, there is
no difference, but in practice, there is."

I've seen too many cases where a bugfix introduced new bugs somewhere
else.  Even if "tested", there might be a border case where an
unexpected result shows up.  Finally, there's the issue of system
testing, making sure the entire package of bugfixes works correctly.

The main reason I suggested two betas was to "lockstep" the bugfix
release to the next version's feature release.

>Or are you thinking of adding small new features to a "bugfix"
>release?  That ought to be a no-no according to your own philosophy!

That's correct.  One problem, though, is that sometimes it's a little
difficult to agree on whether a particular piece of code is a feature or
a bugfix.  For example, the recent work to resolve case-sensitive
imports could be argued either way -- and if we want Python 2.0 to run
on OS X, we'd better decide that it's a bugfix.  ;-)

>> If there's interest in this idea, I'll write it up as a formal PEP.
>
>Please do.

Okay, I'll do it after the conference.  I've e-mailed Barry to ask for a
PEP number.
-- 
                      --- Aahz (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het    <*>     http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Nostalgia just ain't what it used to be


From guido@digicool.com  Sat Mar  3 22:18:45 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 17:18:45 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 16:55:28 EST."
 <200103032155.QAA05049@panix3.panix.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <mailman.983646726.27322.python-list@python.org>
 <200103032155.QAA05049@panix3.panix.com>
Message-ID: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>

[Aahz]
> >> I think so, yes, on that latter clause.  I think perhaps it wasn't clear
> >> at the time, but I believe that much of the yelling over "print >>" was
> >> less over the specific design but because it came so close to the
> >> release of 2.0 that there wasn't *time* to sit down and talk things
> >> over rationally.

[Guido]
> >In my eyes the issues are somewhat different: "print >>" couldn't
> >possibly break existing code; nested scopes clearly do, and that's why
> >we decided to use the __future__ statement.
> >
> >But I understand that you're saying that the community has grown so
> >conservative that it can't stand new features even if they *are* fully
> >backwards compatible.

[Aahz]
> Then you understand incorrectly.  There's a reason why I emphasized
> "*time*" up above.  It takes time to grok a new feature, time to think
> about whether and how we should argue in favor or against it, time to
> write comprehensible and useful arguments.  In hindsight, I think you
> probably did make the right design decision on "print >>", no matter how
> ugly I think it looks.  But I still think you made absolutely the wrong
> decision to include it in 2.0.

Then I respectfully disagree.  We took plenty of time to discuss
"print >>" amongst ourselves.  I don't see the point of letting the
whole community argue about every little new idea before we include it
in a release.  We want good technical feedback, of course.  But if it
takes time to get emotionally used to an idea, you can use your own
time.

> >With the CVS branch it's *trivial* to keep it going.  We should have
> >learned from the Tcl folks, they've had 8.NpM releases for a while.
> 
> I'm suggesting having one official PythonLabs-created bug fix release as
> being a small incremental effort over the work in the feature release.
> But if you want it to be an entirely community-driven effort, I can't
> argue with that.

We will surely put in an effort, but we're limited in what we can do,
so I'm inviting the community to pitch in.  Even just a wish-list of
fixes that are present in 2.1 that should be merged back into 2.0.1
would help!

> My one central point is that I think this will fail if PythonLabs
> doesn't agree to formally certify each release.

Of course we will do that -- I already said so.  And not just for
2.0.1 -- for all bugfix releases, as long as they make sense.

> I've seen too many cases where a bugfix introduced new bugs somewhere
> else.  Even if "tested", there might be a border case where an
> unexpected result shows up.  Finally, there's the issue of system
> testing, making sure the entire package of bugfixes works correctly.

I hope that the experience with 2.1 will validate most bugfixes that
go into 2.0.1.

> The main reason I suggested two betas was to "lockstep" the bugfix
> release to the next version's feature release.

Unclear what you want there.  Why tie the two together?  How?

> >Or are you thinking of adding small new features to a "bugfix"
> >release?  That ought to be a no-no according to your own philosophy!
> 
> That's correct.  One problem, though, is that sometimes it's a little
> difficult to agree on whether a particular piece of code is a feature or
> a bugfix.  For example, the recent work to resolve case-sensitive
> imports could be argued either way -- and if we want Python 2.0 to run
> on OS X, we'd better decide that it's a bugfix.  ;-)

But the Windows change is clearly a feature, so that can't be added to
2.0.1.  We'll have to discuss this particular one.  If 2.0 doesn't
work on MacOS X now, why couldn't MacOS X users install 2.1?  They
can't have working code that breaks, can they?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From tim.one@home.com  Sun Mar  4 05:18:05 2001
From: tim.one@home.com (Tim Peters)
Date: Sun, 4 Mar 2001 00:18:05 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMGJDAA.tim.one@home.com>

FYI, in reviewing Misc/HISTORY, it appears that the last Python release
*called* a "pure bugfix release" was in November of 1994 (1.1.1) -- although
"a few new features were added to tkinter" anyway.

fine-by-me-if-we-just-keep-up-the-good-work<wink>-ly y'rs  - tim



From tim.one@home.com  Sun Mar  4 06:00:44 2001
From: tim.one@home.com (Tim Peters)
Date: Sun, 4 Mar 2001 01:00:44 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEMHJDAA.tim.one@home.com>

[Aahz]
> ...
> For example, the recent work to resolve case-sensitive imports could
> be argued either way -- and if we want Python 2.0 to run on OS X,
> we'd better decide that it's a bugfix.  ;-)

[Guido]
> But the Windows change is clearly a feature,

Yes.

> so that can't be added to 2.0.1.

That's what Aahz is debating.

> We'll have to discuss this particular one.  If 2.0 doesn't
> work on MacOS X now, why couldn't MacOS X users install 2.1?  They
> can't have working code that breaks, can they?

You're a Giant Corporation that ships a multi-platform product, including
Python 2.0.  Since your IT dept is frightened of its own shadow, they won't
move to 2.1.  Since there is no bound to your greed, you figure that even if
there are only a dozen MacOS X users in the world, you could make 10 bucks
off of them if only you can talk PythonLabs into treating the lack of 2.0
MacOS X support as "a bug", getting PythonLabs to backstitch the port into a
2.0 follow-on (*calling* it 2.0.x serves to pacify your IT paranoids).  No
cost to you, and 10 extra dollars in your pocket.  Everyone wins <wink>.

There *are* some companies so unreasonable in their approach.  Replace "a
dozen" and "10 bucks" by much higher numbers, and the number of companies
mushrooms accordingly.

If we put out a release that actually did nothing except fix legitimate bugs,
PythonLabs may have enough fingers to count the number of downloads.  For
example, keen as *I* was to see a bugfix release for the infamous 1.5.2
"invalid tstate" bug, I didn't expect anyone would pick it up except for Mark
Hammond and the other guy who bumped into it (it was very important to them).
Other people simply won't pick it up unless and until they bump into the bug
it fixes, due to the same "if it's not obviously broken, *any* change is
dangerous" fear that motivates everyone clinging to old releases by choice.

Curiously, I eventually got my Win95 box into a state where it routinely ran
for a solid week without crashing (the MTBF at the end was about 100x higher
than when I got the machine).  I didn't do that by avoiding MS updates, but
by installing *every* update they offered ASAP, even for subsystems I had no
intention of ever using.  That's the contrarian approach to keeping your
system maximally stable, relying on the observation that the code that works
best is extremely likely to be the code that the developers use themselves.

If someone thinks there's a market for Python bugfix releases that's worth
more than it costs, great -- they can get filthy rich off my appalling lack
of vision <wink>.

"worth-more-than-it-costs"-is-key-ly y'rs  - tim



From tim.one@home.com  Sun Mar  4 06:50:58 2001
From: tim.one@home.com (Tim Peters)
Date: Sun, 4 Mar 2001 01:50:58 -0500
Subject: [Python-Dev] a small C style question
In-Reply-To: <05f101c0a2f3$cf4bae10$e46940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMLJDAA.tim.one@home.com>

[Fredrik Lundh]
> DEC's OpenVMS compiler is a bit pickier than most other compilers.
> among other things, it correctly notices that the "code" variable in
> this statement is an unsigned variable:
>
>     UNICODEDATA:
>
>         if (code < 0 || code >= 65536)
>     ........^
>     %CC-I-QUESTCOMPARE, In this statement, the unsigned
>     expression "code" is being compared with a relational
>     operator to a constant whose value is not greater than
>     zero.  This might not be what you intended.
>     at line number 285 in file UNICODEDATA.C
>
> the easiest solution would of course be to remove the "code < 0"
> part, but code is a Py_UCS4 variable.  what if someone some day
> changes Py_UCS4 to a 64-bit signed integer, for example?
>
> what's the preferred style?
>
> 1) leave it as is, and let OpenVMS folks live with the
> compiler complaint
>
> 2) get rid of "code < 0" and hope that nobody messes
> up the Py_UCS4 declaration
>
> 3) cast "code" to a known unsigned type, e.g:
>
>         if ((unsigned int) code >= 65536)

#2.  The comment at the declaration of Py_UCS4 insists that an unsigned type
be used:

/*
 * Use this typedef when you need to represent a UTF-16 surrogate pair
 * as single unsigned integer.
             ^^^^^^^^
 */
#if SIZEOF_INT >= 4
typedef unsigned int Py_UCS4;
#elif SIZEOF_LONG >= 4
typedef unsigned long Py_UCS4;
#endif

If someone needs to boost that to a 64-bit int someday (hard to imagine ...),
they can boost it to an unsigned 64-bit int just as well.

If you really need to cater to impossibilities <0.5 wink>, #define a
Py_UCS4_IN_RANGE macro next to the typedef, and use the macro instead.



From gmcm@hypernet.com  Sun Mar  4 15:54:50 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Sun, 4 Mar 2001 10:54:50 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEMHJDAA.tim.one@home.com>
References: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>
Message-ID: <3AA21EFA.30660.4C134459@localhost>

[Tim justifies one-release-back mentality]
> You're a Giant Corporation that ships a multi-platform product,
> including Python 2.0.  Since your IT dept is frightened of its
> own shadow, they won't move to 2.1.  Since there is no bound to
> your greed, you figure that even if there are only a dozen MacOS
> X users in the world, you could make 10 bucks off of them if only
> you can talk PythonLabs into treating the lack of 2.0 MacOS X
> support as "a bug", getting PythonLabs to backstitch the port
> into a 2.0 follow-on (*calling* it 2.0.x serves to pacify your IT
> paranoids).  No cost to you, and 10 extra dollars in your pocket.
>  Everyone wins <wink>.

There is a curious psychology involved. I've noticed that a 
significant number of people (roughly 30%) always download 
an older release.

Example: Last week I announced a new release (j) of Installer. 
70% of the downloads were for that release.

There is only one previous Python 2 version of Installer 
available, but of people downloading a Python 2 version, 17% 
chose the older (I always send people to the html page, and 
none of the referrers shows a direct link - so this was a 
conscious decision).

Of people downloading a 1.5.2 release (15% of total), 69% 
chose the latest, and 31% chose an older. This is the stable 
pattern (the fact that 83% of Python 2 users chose the latest 
is skewed by the fact that this was the first week it was 
available).

Since I yank a release if it turns out to introduce bugs, these 
people are not downloading older because they've heard it 
"works better". The interface has hardly changed in the entire 
span of available releases, so these are not people avoiding 
learning something new.

These are people who are simply highly resistant to anything 
new, with no inclination to test their assumptions against 
reality.

As Guido said, Republicans :-). 


- Gordon


From thomas@xs4all.net  Mon Mar  5 00:16:55 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 5 Mar 2001 01:16:55 +0100
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103031910.OAA21663@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sat, Mar 03, 2001 at 02:10:35PM -0500
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <200103031910.OAA21663@cj20424-a.reston1.va.home.com>
Message-ID: <20010305011655.V9678@xs4all.nl>

On Sat, Mar 03, 2001 at 02:10:35PM -0500, Guido van Rossum wrote:

> But I understand that you're saying that the community has grown so
> conservative that it can't stand new features even if they *are* fully
> backwards compatible.

There is an added dimension, especially with Python. Bugs in the new
features. If it entails changes in the compiler or VM (like import-as, which
changed the meaning of FROM_IMPORT and added an IMPORT_STAR opcode) or if
modules get augmented to use the new features, these changes can introduce
bugs into existing code that doesn't even use the new features itself.

> I wonder, does that extend to new library modules?  Is there also
> resistance against the growth there?  I don't think so -- if anything,
> people are clamoring for more stuff to become standard (while at the
> same time I feel some pressure to cut dead wood, like the old SGI
> multimedia modules).

No (yes), bugfix releases should fix bugs, not add features (nor remove
them). Modules in the std lib are just features.

> So that relegates us at PythonLabs to a number of things: coding new
> modules (boring), or trying to improve performance of the virtual
> machine (equally boring, and difficult to boot), or fixing bugs (did I
> mention boring? :-).

How can you say this ? Okay, so *fixing* bugs isn't terribly exciting, but
hunting them down is one of the best sports around. Same for optimizations:
rewriting the code might be boring (though if you are a fast typist, it
usually doesn't take long enough to get boring :) but thinking them up is
the fun part. 

But who said PythonLabs had to do all the work ? You guys didn't do all the
work in 2.0->2.1, did you ? Okay, so most of the major features are written
by PythonLabs, and most of the decisions are made there, but there's no real
reason for it. Consider the Linux kernel: Linus Torvalds releases the
kernels in the devel 'tree' and usually the first few kernels in the
'stable' tree, and then Alan Cox takes over the stable tree and continues
it. (Note that this analogy isn't quite correct: the stable tree often
introduces new features, new drivers, etc, but avoids real incompatibilities
and usually doesn't require extra upgrades of tools and such.)

I hope you don't think any less of me if I volunteer *again* :-) but I'm
perfectly willing to maintain the bugfix release(s). I also don't think we
should necessarily stay at a single bugfix release. Whether or not a 'beta'
for the bugfix release is necessary, I'm not sure. I don't think so, at
least not if you release multiple bugfix releases. 

Holiday-Greetings-from-Long-Beach-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From jeremy@alum.mit.edu  Sat Mar  3 23:32:32 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Sat, 3 Mar 2001 18:32:32 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <20010305011655.V9678@xs4all.nl>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
 <mailman.982897324.9109.python-list@python.org>
 <200103031821.NAA24060@panix3.panix.com>
 <200103031910.OAA21663@cj20424-a.reston1.va.home.com>
 <20010305011655.V9678@xs4all.nl>
Message-ID: <15009.32528.29406.232901@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

  [GvR:]
  >> So that relegates us at PythonLabs to a number of things: coding
  >> new modules (boring), or trying to improve performance of the
  >> virtual machine (equally boring, and difficult to boot), or
  >> fixing bugs (did I mention boring? :-).

  TW> How can you say this ? Okay, so *fixing* bugs isn't terribly
  TW> exciting, but hunting them down is one of the best sports
  TW> around. Same for optimizations: rewriting the code might be
  TW> boring (though if you are a fast typist, it usually doesn't take
  TW> long enough to get boring :) but thinking them up is the fun
  TW> part.

  TW> But who said PythonLabs had to do all the work ? You guys didn't
  TW> do all the work in 2.0->2.1, did you ? Okay, so most of the
  TW> major features are written by PythonLabs, and most of the
  TW> decisions are made there, but there's no real reason for
  TW> it.

Most of the work I did for Python 2.0 was fixing bugs.  It was a lot
of fairly tedious but necessary work.  I have always imagined that
this was work that most people wouldn't do unless they were paid to do
it.  (python-dev seems to have a fair number of exceptions, though.)

Working on major new features has a lot more flash, so I imagine that
volunteers would be more inclined to help.  Neil's work on GC or yours
on augmented assignment are examples.

There's nothing that says we have to do all the work.  In fact, I
imagine we'll continue to collectively spend a lot of time on
maintenance issues.  We get paid to do it, and we get to hack on Zope
and ZODB the rest of the time, which is also a lot of fun.

Jeremy


From jack@oratrix.nl  Mon Mar  5 10:47:17 2001
From: jack@oratrix.nl (Jack Jansen)
Date: Mon, 05 Mar 2001 11:47:17 +0100
Subject: [Python-Dev] os module UserDict
Message-ID: <20010305104717.A5104373C95@snelboot.oratrix.nl>

Importing os has started failing on the Mac since the riscos mods went in: 
it tries to use UserDict without having imported it first.

I think that the problem is that the whole _Environ stuff should be inside the 
else part of the try/except, but I'm not sure I fully understand what goes on. 
Could whoever did these mods have a look?

Also, it seems that the whole if name != "riscos" is a bit of a hack...
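
For readers following along, here is a minimal sketch (my own illustration, not
the actual os.py code of the time) of the structure under discussion: the
UserDict-based wrapper only makes sense on platforms that have putenv, so its
definition belongs in the else clause of the try/except that probes for it.

```python
from collections import UserDict  # os.py of the era used the UserDict module


def make_environ(environ, putenv=None):
    """Sketch: wrap environ so assignments also call putenv, when available."""
    if putenv is None:
        # No putenv on this platform: a plain mapping is all we can offer,
        # and we never need the UserDict machinery at all.
        return environ

    class _Environ(UserDict):
        def __init__(self, data):
            super().__init__()
            self.data.update(data)   # bypass __setitem__ during construction

        def __setitem__(self, key, value):
            putenv(key, value)       # keep the process environment in sync
            super().__setitem__(key, value)

    return _Environ(environ)
```

The point is simply that platforms lacking putenv never touch the wrapper
class, which is the structural fix Jack is suggesting.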
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++




From phil@river-bank.demon.co.uk  Mon Mar  5 16:15:13 2001
From: phil@river-bank.demon.co.uk (Phil Thompson)
Date: Mon, 05 Mar 2001 16:15:13 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
Message-ID: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>

This is a multi-part message in MIME format.
--------------8B3DF66E5341F2A79134074D
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

Any chance of the attached small patch being applied to enable weak
references to functions?

It's particularly useful for lambda functions and closes the "very last
loophole where a programmer can cause a PyQt script to seg fault" :)
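
For illustration, this is the behaviour the patch enables (it works in modern
CPython, where function objects are weak-referenceable); the names here are
mine:

```python
import weakref


def make_handler():
    # Returns a lambda, the kind of object PyQt callbacks often hold.
    return lambda x: x + 1


f = make_handler()
r = weakref.ref(f)   # raises TypeError when functions aren't weak-referenceable
assert r() is f      # the referent is still alive
del f                # drop the only strong reference
assert r() is None   # CPython reclaims the function via reference counting
```

The last assertion is deterministic in CPython precisely because functions are
freed as soon as their refcount hits zero, which is what lets an extension
avoid the dangling-callback segfault Phil mentions.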

Phil
--------------8B3DF66E5341F2A79134074D
Content-Type: text/plain; charset=us-ascii;
 name="wrfunctions.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline;
 filename="wrfunctions.patch"

diff -ruN Python-2.1b1.orig/Include/funcobject.h Python-2.1b1/Include/funcobject.h
--- Python-2.1b1.orig/Include/funcobject.h	Thu Jan 25 20:06:58 2001
+++ Python-2.1b1/Include/funcobject.h	Mon Mar  5 13:00:58 2001
@@ -16,6 +16,7 @@
     PyObject *func_doc;
     PyObject *func_name;
     PyObject *func_dict;
+    PyObject *func_weakreflist;
 } PyFunctionObject;
 
 extern DL_IMPORT(PyTypeObject) PyFunction_Type;
diff -ruN Python-2.1b1.orig/Objects/funcobject.c Python-2.1b1/Objects/funcobject.c
--- Python-2.1b1.orig/Objects/funcobject.c	Thu Mar  1 06:06:37 2001
+++ Python-2.1b1/Objects/funcobject.c	Mon Mar  5 13:39:37 2001
@@ -245,6 +245,8 @@
 static void
 func_dealloc(PyFunctionObject *op)
 {
+	PyObject_ClearWeakRefs((PyObject *) op);
+
 	PyObject_GC_Fini(op);
 	Py_DECREF(op->func_code);
 	Py_DECREF(op->func_globals);
@@ -336,4 +338,7 @@
 	Py_TPFLAGS_DEFAULT | Py_TPFLAGS_GC, /*tp_flags*/
 	0,		/* tp_doc */
 	(traverseproc)func_traverse,	/* tp_traverse */
+	0,		/* tp_clear */
+	0,		/* tp_richcompare */
+	offsetof(PyFunctionObject, func_weakreflist)	/* tp_weaklistoffset */
 };

--------------8B3DF66E5341F2A79134074D--



From thomas@xs4all.net  Mon Mar  5 23:28:50 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 6 Mar 2001 00:28:50 +0100
Subject: [Python-Dev] Patch for Weak References for Functions
In-Reply-To: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>; from phil@river-bank.demon.co.uk on Mon, Mar 05, 2001 at 04:15:13PM +0000
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
Message-ID: <20010306002850.B9678@xs4all.nl>

On Mon, Mar 05, 2001 at 04:15:13PM +0000, Phil Thompson wrote:

> Any chance of the attached small patch being applied to enable weak
> references to functions?

It's probably best to upload it to SourceForge, even though it seems pretty
broken right now. Especially during the Python conference, posts are
terribly likely to fall into oblivion.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From skip@mojam.com (Skip Montanaro)  Tue Mar  6 00:33:05 2001
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:33:05 -0600 (CST)
Subject: [Python-Dev] Who wants this GCC/Solaris bug report?
Message-ID: <15012.12353.311124.819970@beluga.mojam.com>

I was assigned the following bug report:

   http://sourceforge.net/tracker/?func=detail&aid=232787&group_id=5470&atid=105470

I made a pass through the code in question, made one change to posixmodule.c
that I thought appropriate (should squelch one warning) and some comments
about the other warnings.  I'm unable to actually test any changes since I
don't run Solaris, so I don't feel comfortable doing anything more.  Can
someone else take this one over?  In theory, my comments should help you
zero in on a fix faster (famous last words).

Skip



From skip@mojam.com (Skip Montanaro)  Tue Mar  6 00:41:50 2001
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:41:50 -0600 (CST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
References: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
 <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
Message-ID: <15012.12878.853762.563753@beluga.mojam.com>

    Tim> Note there was never a bugfix release for 1.5.2, despite that 1.5.2
    Tim> had some serious bugs, and that 1.5.2 was current for an
    Tim> unprecedentedly long time.  Guido put out a call for volunteers to
    Tim> produce a 1.5.2 bugfix release, but nobody responded.  Past is
    Tim> prelude ...

Yes, but 1.5.2 source was managed differently.  It was released while the
source was still "captive" to CNRI, and the conversion to SourceForge came,
relatively speaking, right before the 2.0 release, with the added
complication that it more-or-less coincided with the formation of
PythonLabs.  With the source tree where someone can easily branch it, I
think it's now feasible to create a bug fix branch and have someone
volunteer to manage additions to it (that is, be the filter that decides if
a code change is a bug fix or a new feature).

Skip


From skip@mojam.com (Skip Montanaro)  Tue Mar  6 00:48:33 2001
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:48:33 -0600 (CST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103032155.QAA05049@panix3.panix.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
 <mailman.982897324.9109.python-list@python.org>
 <200103031821.NAA24060@panix3.panix.com>
 <mailman.983646726.27322.python-list@python.org>
 <200103032155.QAA05049@panix3.panix.com>
Message-ID: <15012.13281.629270.275993@beluga.mojam.com>

    aahz> Yup.  The idea is that because it's always an N and N-1 pair, the
    aahz> base code is the same for both and applying patches to both should
    aahz> be (relatively speaking) a small amount of extra work.  Most of
    aahz> the work lies in deciding *which* patches should go into N-1.

The only significant problem I see is making sure submitted patches contain
just bug fixes or new features and not a mixture of the two.

    aahz> The main reason I suggested two betas was to "lockstep" the bugfix
    aahz> release to the next version's feature release.

I don't see any real reason to sync them.  There's no particular reason I
can think of why you couldn't have 2.1.1, 2.1.2 and 2.1.3 releases before
2.2.0 is released and not have any bugfix release coincident with 2.2.0.
Presumably, any bug fixes between the release of 2.1.3 and 2.2.0 would also
appear in the feature branch.  As long as there was someone willing to
manage a particular bug fix branch, such a branch could continue for a
relatively long ways, long past the next feature release.

Skip



From skip@mojam.com (Skip Montanaro)  Tue Mar  6 00:53:38 2001
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:53:38 -0600 (CST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <3AA21EFA.30660.4C134459@localhost>
References: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>
 <3AA21EFA.30660.4C134459@localhost>
Message-ID: <15012.13586.201583.620776@beluga.mojam.com>

    Gordon> There is a curious psychology involved. I've noticed that a
    Gordon> significant number of people (roughly 30%) always download an
    Gordon> older release.

    Gordon> Example: Last week I announced a new release (j) of Installer.
    Gordon> 70% of the downloads were for that release.

    ...

    Gordon> Of people downloading a 1.5.2 release (15% of total), 69% 
    Gordon> chose the latest, and 31% chose an older. This is the stable 
    Gordon> pattern (the fact that 83% of Python 2 users chose the latest 
    Gordon> is skewed by the fact that this was the first week it was 
    Gordon> available).

Check your web server's referral logs.  I suspect a non-trivial fraction of
those 30% were coming via offsite links such as search engine referrals and
weren't even aware a new installer was available.

Skip


From gmcm@hypernet.com  Tue Mar  6 02:09:38 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Mon, 5 Mar 2001 21:09:38 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <15012.13586.201583.620776@beluga.mojam.com>
References: <3AA21EFA.30660.4C134459@localhost>
Message-ID: <3AA40092.13561.536C8052@localhost>

>     Gordon> Of people downloading a 1.5.2 release (15% of total), 69%
>     Gordon> chose the latest, and 31% chose an older. This is the stable
>     Gordon> pattern (the fact that 83% of Python 2 users chose the latest
>     Gordon> is skewed by the fact that this was the first week it was
>     Gordon> available).
[Skip] 
> Check your web server's referral logs.  I suspect a non-trivial
> fraction of those 30% were coming via offsite links such as
> search engine referrals and weren't even aware a new installer
> was available.

That's the whole point - these stats are from the referrals. My 
download directory is not indexed or browsable. I only 
announce the page with the download links on it. And sure 
enough, all downloads come from there.

- Gordon


From fdrake@acm.org  Mon Mar  5 16:15:27 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Mon, 5 Mar 2001 11:15:27 -0500 (EST)
Subject: [Python-Dev] XML runtime errors?
In-Reply-To: <01f701c01d05$0aa98e20$766940d5@hagrid>
References: <009601c01cf1$467458e0$766940d5@hagrid>
 <200009122155.QAA01452@cj20424-a.reston1.va.home.com>
 <01f701c01d05$0aa98e20$766940d5@hagrid>
Message-ID: <15011.48031.772007.248246@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > how about adding:
 > 
 >     class XMLError(RuntimeError):
 >         pass

  Looks like someone already added Error for this.

 > > > what's wrong with "SyntaxError"?
 > > 
 > > That would be the wrong exception unless it's parsing Python source
 > > code.
 > 
 > gotta fix netrc.py then...

  And this still isn't done.  I've made changes in my working copy,
introducing a specific exception which carries useful information
(msg, filename, lineno), so that all syntax exceptions get this
information as well.
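
The exception Fred describes eventually landed in the standard library as
netrc.NetrcParseError; its shape is roughly the following sketch:

```python
class NetrcParseError(Exception):
    """Syntax error carrying the (msg, filename, lineno) info described above."""

    def __init__(self, msg, filename=None, lineno=None):
        self.filename = filename
        self.lineno = lineno
        self.msg = msg
        Exception.__init__(self, msg)

    def __str__(self):
        # Location info makes the error actionable for the user.
        return "%s (%s, line %s)" % (self.msg, self.filename, self.lineno)
```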


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From martin@loewis.home.cs.tu-berlin.de  Tue Mar  6 07:22:58 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 6 Mar 2001 08:22:58 +0100
Subject: [Python-Dev] os module UserDict
Message-ID: <200103060722.f267Mwe01222@mira.informatik.hu-berlin.de>

> I think that the problem is that the whole _Environ stuff should be
> inside the else part of the try/except, but I'm not sure I fully
> understand what goes on.  Could whoever did these mods have a look?

I agree that this patch was broken; the _Environ stuff was in the else
part before. The change was committed by gvanrossum; the checkin
comment says that its author was dschwertberger. 

> Also, it seems that the whole if name != "riscos" is a bit of a
> hack...

I agree. What it seems to say is 'even though riscos does have a
putenv, we cannot/should not/must not wrap environ with a UserDict.'

I'd suggest to back-out this part of the patch, unless a consistent
story can be given RSN.

Regards,
Martin

P.S. os.py mentions an "import riscos". Where is that module?


From jack@oratrix.nl  Tue Mar  6 13:31:12 2001
From: jack@oratrix.nl (Jack Jansen)
Date: Tue, 06 Mar 2001 14:31:12 +0100
Subject: [Python-Dev] __all__ in urllib
Message-ID: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>

The __all__ that was added to urllib recently causes me quite a lot of grief 
(This is "me the application programmer", not "me the macpython maintainer"). 
I have a module that extends urllib; what used to work was a simple 
"from urllib import *" plus a few override functions, but with this 
__all__ stuff that doesn't work anymore.

I started fixing up __all__, but then I realised that this is probably not the 
right solution. "from xxx import *" can really be used for two completely 
distinct cases. One is as a convenience, where the user doesn't want to prefix 
all references with xxx. But the other distinct case is in a module that is an 
extension of another module. In this second case you would really want to 
bypass this whole __all__ mechanism.

I think that the latter is a valid use case for import *, and that there 
should be some way to get this behaviour.
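
To make the mechanics concrete, here is a small self-contained sketch of
what __all__ does to "import *" (the module and names here are invented
for the demonstration; a real urllib has many more):

```python
import sys
import types

# Build a throwaway module in memory (names invented for this sketch).
mod = types.ModuleType("demo_mod")
mod.public = 1
mod.helper = 2
mod.__all__ = ["public"]          # 'helper' is now hidden from import *
sys.modules["demo_mod"] = mod

ns = {}
exec("from demo_mod import *", ns)
assert "public" in ns             # listed in __all__: copied
assert "helper" not in ns         # not listed: silently skipped
```

So a subclass-style module that relied on picking up every name now
silently loses the unlisted ones.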
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++




From skip@mojam.com (Skip Montanaro)  Tue Mar  6 13:51:49 2001
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Tue, 6 Mar 2001 07:51:49 -0600 (CST)
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
References: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
Message-ID: <15012.60277.150431.237935@beluga.mojam.com>

    Jack> I started fixing up __all__, but then I realised that this is
    Jack> probably not the right solution. 

    Jack> One is as a convenience, where the user doesn't want to prefix all
    Jack> references with xxx. but the other distinct case is in a module
    Jack> that is an extension of another module. In this second case you
    Jack> would really want to bypass this whole __all__ mechanism.

    Jack> I think that the latter is a valid use case for import *, and that
    Jack> there should be some way to get this behaviour.

Two things come to mind.  One, perhaps a more careful coding of urllib to
avoid exposing names it shouldn't export would be a better choice.  Two,
perhaps those symbols that are not documented but that would be useful when
extending urllib functionality should be documented and added to __all__.

Here are the non-module names I didn't include in urllib.__all__:

    MAXFTPCACHE
    localhost
    thishost
    ftperrors
    noheaders
    ftpwrapper
    addbase
    addclosehook
    addinfo
    addinfourl
    basejoin
    toBytes
    unwrap
    splittype
    splithost
    splituser
    splitpasswd
    splitport
    splitnport
    splitquery
    splittag
    splitattr
    splitvalue
    splitgophertype
    always_safe
    getproxies_environment
    getproxies
    getproxies_registry
    test1
    reporthook
    test
    main

None are documented, so there are no guarantees if you use them (I have
subclassed addinfourl in the past myself).

Skip


From sjoerd@oratrix.nl  Tue Mar  6 16:19:11 2001
From: sjoerd@oratrix.nl (Sjoerd Mullender)
Date: Tue, 06 Mar 2001 17:19:11 +0100
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: Your message of Fri, 02 Mar 2001 09:22:27 -0500.
 <200103021422.JAA06497@cj20424-a.reston1.va.home.com>
References: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>
Message-ID: <20010306161912.54E9A301297@bireme.oratrix.nl>

At the meeting of W3C working groups last week in Cambridge, MA, I saw
that he used Python...

On Fri, Mar 2 2001 Guido van Rossum wrote:

> I was tickled when I found a quote from Tim Berners-Lee about Python
> here: http://www.w3.org/2000/10/swap/#L88
> 
> Most quotable part: "Python is a language you can get into on one
> battery!"
> 
> We should be able to use that for PR somewhere...
> 
> --Guido van Rossum (home page: http://www.python.org/~guido/)
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
> 

-- Sjoerd Mullender <sjoerd.mullender@oratrix.com>


From dietmar@schwertberger.de  Tue Mar  6 22:54:30 2001
From: dietmar@schwertberger.de (Dietmar Schwertberger)
Date: Tue, 6 Mar 2001 23:54:30 +0100 (GMT)
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <200103060722.f267Mwe01222@mira.informatik.hu-berlin.de>
Message-ID: <Marcel-1.53-0306225430-0b02%2U@schwertberger.freenet.de>

Hi Martin,

thanks for CC'ing to me.

On Tue 06 Mar, Martin v. Loewis wrote:
> > I think that the problem is that the whole _Environ stuff should be
> > inside the else part of the try/except, but I'm not sure I fully
> > understand what goes on.  Could whoever did these mods have a look?
> 
> I agree that this patch was broken; the _Environ stuff was in the else
> part before. The change was committed by gvanrossum; the checkin
> comment says that its author was dschwertberger. 
Yes, it's from me. Unfortunately a whitespace problem with me, my editor
and my diffutils required Guido to apply most of the patches manually...


> > Also, it seems that the whole if name != "riscos" is a bit of a
> > hack...
> 
> I agree. What it seems to say is 'even though riscos does have a
> putenv, we cannot/should not/must not wrap environ with a UserDict.'
> 
> I'd suggest to back-out this part of the patch, unless a consistent
> story can be given RSN.
In plat-riscos there is a different UserDict-like implementation of
environ which is imported at the top of os.py in the 'riscos' part.
'name != "riscos"' just avoids overriding this. Maybe it would have
been better to include riscosenviron._Environ into os.py, as this would
look - and be - less hacky?
I must admit, I didn't care much when I started with riscosenviron.py
by just copying UserDict.py last year.

The RISC OS implementation doesn't store any data itself but just
emulates a dictionary with getenv() and putenv().
This is more suitable for the use of the environment under RISC OS, as
it is used quite heavily for a lot of configuration data and may grow
to some hundred k quite easily, so it is undesirable to import all the
data at startup if it is not really required.
Also, the environment is sometimes used for communication between tasks
(the changes don't just affect subprocesses started later, but all
tasks), so read access to environ should return the current value.
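
A stripped-down sketch of the idea, with a plain dict standing in for
the real RISC OS environment and _getenv/_putenv standing in for the
riscos module's calls (all names here are invented for the sketch):

```python
# The backing dict plays the role of the OS-maintained environment.
_os_env = {"Alias$Run": "Run"}

def _getenv(name):               # stand-in for riscos.getenv
    return _os_env.get(name)

def _putenv(name, value):        # stand-in for riscos.putenv
    _os_env[name] = value

class _Environ:
    """Stores no data itself; every access goes straight to the OS."""
    def __getitem__(self, key):
        value = _getenv(key)
        if value is None:
            raise KeyError(key)
        return value
    def __setitem__(self, key, value):
        _putenv(key, value)      # writes are visible to other tasks
    def get(self, key, default=None):
        value = _getenv(key)
        return value if value is not None else default

environ = _Environ()
environ["Python$Path"] = "ADFS::4.$.Python"
_os_env["New$Var"] = "live"            # changed behind our back...
assert environ["New$Var"] == "live"    # ...and a read sees it immediately
```

Nothing is copied at startup, and reads always reflect the current state.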


And this is just _one_ of the points where RISC OS is different from
the rest of the world...


> Regards,
> Martin
> 
> P.S. os.py mentions an "import riscos". Where is that module?
riscosmodule.c lives in the RISCOS subdirectory together with all the
other RISC OS specific stuff needed for building the binaries.


Regards,

Dietmar

P.S.: How can I subscribe to python-dev (at least read-only)?
      I couldn't find a reference on python.org or Sourceforge.
P.P.S.: If you wonder what RISC OS is and why it is different:
        You may remember the 'Archimedes' from the British
        manufacturer Acorn. This was the first RISC OS computer...



From martin@loewis.home.cs.tu-berlin.de  Wed Mar  7 06:38:52 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 7 Mar 2001 07:38:52 +0100
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <Marcel-1.53-0306225430-0b02%2U@schwertberger.freenet.de>
 (message from Dietmar Schwertberger on Tue, 6 Mar 2001 23:54:30 +0100
 (GMT))
References: <Marcel-1.53-0306225430-0b02%2U@schwertberger.freenet.de>
Message-ID: <200103070638.f276cqj01518@mira.informatik.hu-berlin.de>

> Yes, it's from me. Unfortunately a whitespace problem with me, my editor
> and my diffutils required Guido to apply most of the patches manually...

I see. What do you think about the patch included below? It also gives
you the default argument to os.getenv, which riscosmodule does not
have.

> In plat-riscos there is a different UserDict-like implementation of
> environ which is imported at the top of os.py in the 'riscos' part.
> 'name != "riscos"' just avoids overriding this. Maybe it would have
> been better to include riscosenviron._Environ into os.py, as this would
> look - and be - less hacky?

No, I think it is good to have the platform-specific code in platform
modules, and only merge them appropriately in os.py.

> P.S.: How can I subscribe to python-dev (at least read-only)?

You can't; it is by invitation only. You can find the archives at

http://mail.python.org/pipermail/python-dev/

Regards,
Martin

Index: os.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/os.py,v
retrieving revision 1.46
diff -u -r1.46 os.py
--- os.py	2001/03/06 15:26:07	1.46
+++ os.py	2001/03/07 06:31:34
@@ -346,17 +346,19 @@
     raise exc, arg
 
 
-if name != "riscos":
-    # Change environ to automatically call putenv() if it exists
-    try:
-        # This will fail if there's no putenv
-        putenv
-    except NameError:
-        pass
-    else:
-        import UserDict
+# Change environ to automatically call putenv() if it exists
+try:
+    # This will fail if there's no putenv
+    putenv
+except NameError:
+    pass
+else:
+    import UserDict
 
-    if name in ('os2', 'nt', 'dos'):  # Where Env Var Names Must Be UPPERCASE
+    if name == "riscos":
+        # On RISC OS, all env access goes through getenv and putenv
+        from riscosenviron import _Environ
+    elif name in ('os2', 'nt', 'dos'):  # Where Env Var Names Must Be UPPERCASE
         # But we store them as upper case
         class _Environ(UserDict.UserDict):
             def __init__(self, environ):
Index: plat-riscos/riscosenviron.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/plat-riscos/riscosenviron.py,v
retrieving revision 1.1
diff -u -r1.1 riscosenviron.py
--- plat-riscos/riscosenviron.py	2001/03/02 05:55:07	1.1
+++ plat-riscos/riscosenviron.py	2001/03/07 06:31:34
@@ -3,7 +3,7 @@
 import riscos
 
 class _Environ:
-    def __init__(self):
+    def __init__(self, initial = None):
         pass
     def __repr__(self):
         return repr(riscos.getenvdict())


From dietmar@schwertberger.de  Wed Mar  7 08:44:54 2001
From: dietmar@schwertberger.de (Dietmar Schwertberger)
Date: Wed, 7 Mar 2001 09:44:54 +0100 (GMT)
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <200103070638.f276cqj01518@mira.informatik.hu-berlin.de>
Message-ID: <Marcel-1.53-0307084454-b492%2U@schwertberger.freenet.de>

On Wed 07 Mar, Martin v. Loewis wrote:
> > Yes, it's from me. Unfortunately a whitespace problem with me, my editor
> > and my diffutils required Guido to apply most of the patches manually...
> 
> I see. What do you think about the patch included below? It also gives
> you the default argument to os.getenv, which riscosmodule does not
> have.
Yes, looks good. Thanks.
Please don't forget to replace the 'from riscosenviron import...' statement
in the riscos section at the start of os.py with an empty 'environ', as
there is no environ in riscosmodule.c:
(The following patch also fixes a bug: 'del ce' instead of 'del riscos')

=========================================================================
*diff -c Python-200:$.Python-2/1b1.Lib.os/py SCSI::SCSI4.$.AcornC_C++.Python.!Python.Lib.os/py 
*** Python-200:$.Python-2/1b1.Lib.os/py Fri Mar  2 07:04:51 2001
--- SCSI::SCSI4.$.AcornC_C++.Python.!Python.Lib.os/py Wed Mar  7 08:31:33 2001
***************
*** 160,170 ****
      import riscospath
      path = riscospath
      del riscospath
!     from riscosenviron import environ
  
      import riscos
      __all__.extend(_get_exports_list(riscos))
!     del ce
  
  else:
      raise ImportError, 'no os specific module found'
--- 160,170 ----
      import riscospath
      path = riscospath
      del riscospath
!     environ = {}
  
      import riscos
      __all__.extend(_get_exports_list(riscos))
!     del riscos
  
  else:
      raise ImportError, 'no os specific module found'
========================================================================

If you change riscosenviron.py, would you mind replacing 'setenv' with
'putenv'? It seems '__setitem__' has never been tested...


Regards,

Dietmar



From martin@loewis.home.cs.tu-berlin.de  Wed Mar  7 09:11:46 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 7 Mar 2001 10:11:46 +0100
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <Marcel-1.53-0307084454-b492%2U@schwertberger.freenet.de>
 (message from Dietmar Schwertberger on Wed, 7 Mar 2001 09:44:54 +0100
 (GMT))
References: <Marcel-1.53-0307084454-b492%2U@schwertberger.freenet.de>
Message-ID: <200103070911.f279Bks02780@mira.informatik.hu-berlin.de>

> Please don't forget to replace the 'from riscosenviron import...' statement
> from the riscos section at the start of os.py with an empty 'environ' as
> there is no environ in riscosmodule.c:

There used to be one in riscosenviron, which you had imported. I've
deleted the entire import (trusting that environ will be initialized
later on); and removed the riscosenviron.environ, which now only has
the _Environ class.

> (The following patch also fixes a bug: 'del ce' instead of 'del riscos')

That change was already applied (probably Guido caught the error when
editing the change in).

> If you change riscosenviron.py, would you mind replacing 'setenv' with
> 'putenv'? It seems '__setitem__' has never been tested...

Done.

Martin


From greg@cosc.canterbury.ac.nz  Thu Mar  8 04:06:20 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 08 Mar 2001 17:06:20 +1300 (NZDT)
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
Message-ID: <200103080406.RAA04034@s454.cosc.canterbury.ac.nz>

Jack Jansen <jack@oratrix.nl>:

> but the other distinct case is in a module that is an 
> extension of another module. In this second case you would really want to 
> bypass this whole __all__ mechanism.
> 
> I think that the latter is a valid use case for import *, and that there 
> should be some way to get this behaviour.

How about:

  from foo import **

meaning "give me ALL the stuff in module foo, no, really,
I MEAN it" (possibly even including _ names).
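
Until something like that exists, the effect can be approximated by
copying vars() by hand, which ignores __all__ entirely (module and names
invented for the sketch):

```python
import types

mod = types.ModuleType("base")     # stand-in for the module being extended
mod.public = 1
mod.helper = 2
mod.__all__ = ["public"]           # plain import * would hide 'helper'

# Poor man's "from base import **": take every binding from vars(),
# __all__ or not (here still skipping _underscore names).
ns = {k: v for k, v in vars(mod).items() if not k.startswith("_")}
assert "helper" in ns and "public" in ns
```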

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From thomas@xs4all.net  Thu Mar  8 23:20:57 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 9 Mar 2001 00:20:57 +0100
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/sandbox test.txt,1.1,NONE
In-Reply-To: <E14b2wY-0005VS-00@usw-pr-cvs1.sourceforge.net>; from jackjansen@users.sourceforge.net on Thu, Mar 08, 2001 at 08:07:10AM -0800
References: <E14b2wY-0005VS-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <20010309002057.H404@xs4all.nl>

On Thu, Mar 08, 2001 at 08:07:10AM -0800, Jack Jansen wrote:

> Testing SSH access from the Mac with MacCVS Pro. It seems to work:-)

Oh boy oh boy! Does that mean you'll merge the MacPython tree into the
normal CVS tree ? Don't forget to assign the proper rights to the PSF :-)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@acm.org  Thu Mar  8 08:28:43 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Thu, 8 Mar 2001 03:28:43 -0500 (EST)
Subject: [Python-Dev] Patch for Weak References for Functions
In-Reply-To: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
Message-ID: <15015.17083.582010.93308@localhost.localdomain>

Phil Thompson writes:
 > Any chance of the attached small patch be applied to enable weak
 > references to functions?
 > 
 > It's particularly useful for lambda functions and closes the "very last
 > loophole where a programmer can cause a PyQt script to seg fault" :)

Phil,
  Can you explain how this would help with the memory issues?  I'd
like to have a better idea of how this would make things work right.
Are there issues with the cyclic GC with respect to the Qt/KDE
bindings?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From phil@river-bank.demon.co.uk  Sat Mar 10 01:20:56 2001
From: phil@river-bank.demon.co.uk (Phil Thompson)
Date: Sat, 10 Mar 2001 01:20:56 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk> <15015.17083.582010.93308@localhost.localdomain>
Message-ID: <3AA98178.35B0257D@river-bank.demon.co.uk>

"Fred L. Drake, Jr." wrote:
> 
> Phil Thompson writes:
>  > Any chance of the attached small patch be applied to enable weak
>  > references to functions?
>  >
>  > It's particularly useful for lambda functions and closes the "very last
>  > loophole where a programmer can cause a PyQt script to seg fault" :)
> 
> Phil,
>   Can you explain how this would help with the memory issues?  I'd
> like to have a better idea of how this would make things work right.
> Are there issues with the cyclic GC with respect to the Qt/KDE
> bindings?

Ok, some background...

Qt implements a component model for its widgets. You build applications
by sub-classing the standard widgets and then "connect" them together.
Connections are made between signals and slots - both are defined as
class methods. Connections perform the same function as callbacks in
more traditional GUI toolkits like Xt. Signals/slots have the advantage
of being type safe and the resulting component model is very powerful -
it encourages class designers to build functionally rich component
interfaces.

PyQt supports this model. It also allows slots to be any Python callable
object - usually a class method. You create a connection between a
signal and slot using the "connect" method of the QObject class (from
which all objects that have signals or slots are derived). connect()
*does not* increment the reference count of a slot that is a Python
callable object. This is a design decision - earlier versions did do
this but it almost always results in circular reference counts. The
downside is that, if the slot object no longer exists when the signal is
emitted (because the programmer has forgotten to keep a reference to the
class instance alive) then the usual result is a seg fault. These days,
this is the only way a PyQt programmer can cause a seg fault with bad
code (famous last words!). This accounts for 95% of PyQt programmers'
problem reports.

With Python v2.1, connect() creates a weak reference to the Python
callable slot. When the signal is emitted, PyQt (actually it's SIP)
finds out that the callable has disappeared and takes care not to cause
the seg fault. The problem is that v2.1 only implements weak references
for class instance methods - not for all callables.

Most of the time callables other than instance methods are fairly fixed
- they are unlikely to disappear - not many scripts start deleting
function definitions. The exception, however, is lambda functions. It is
sometimes convenient to define a slot as a lambda function in order to
bind an extra parameter to the slot. Obviously lambda functions are much
more transient than regular functions - a PyQt programmer can easily
forget to make sure a reference to the lambda function stays alive. The
patch I proposed gives the PyQt programmer the same protection for
lambda functions as Python v2.1 gives them for class instance methods.
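
With weakly referenceable functions, the connect() guard reduces to
something like this (a sketch of the idea, not SIP's actual code;
make_connection is an invented name):

```python
import weakref

def make_connection(slot):
    """Hold only a weak reference to the slot, as connect() does."""
    ref = weakref.ref(slot)
    def emit(*args):
        target = ref()
        if target is None:       # the slot has been garbage-collected
            return None          # quietly ignore instead of crashing
        return target(*args)
    return emit

handler = lambda x: x + 1        # a transient lambda slot
emit = make_connection(handler)
assert emit(1) == 2
del handler                      # the only strong reference disappears
assert emit(1) is None           # emitting is now a safe no-op
```

No cycle is created, and a forgotten reference degrades to an ignored
signal rather than a seg fault.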

To be honest, I don't see why weak references have been implemented as a
bolt-on module that only supports one particular object type. The thing
I most like about the Python implementation is how consistent it is.
Weak references should be implemented for every object type - even for
None - you never know when it might come in useful.

As far as cyclic GC is concerned - I've ignored it completely, nobody
has made any complaints - so it either works without any problems, or
none of my user base is using it.

Phil


From skip@mojam.com (Skip Montanaro)  Sat Mar 10 01:49:04 2001
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Fri, 9 Mar 2001 19:49:04 -0600 (CST)
Subject: [Python-Dev] Patch for Weak References for Functions
In-Reply-To: <3AA98178.35B0257D@river-bank.demon.co.uk>
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
 <15015.17083.582010.93308@localhost.localdomain>
 <3AA98178.35B0257D@river-bank.demon.co.uk>
Message-ID: <15017.34832.44442.981293@beluga.mojam.com>

    Phil> This is a design decision - earlier versions did do this but it
    Phil> almost always results in circular reference counts. 

With cyclic GC couldn't you just let those circular reference counts occur
and rely on the GC machinery to break the cycles?  Or do you have __del__
methods? 
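
Something like this is what I have in mind: the plain case where the
collector does the work refcounting can't (cycles whose members define
__del__ were the sticking point, of course):

```python
import gc
import weakref

class Widget:
    pass

a = Widget()
b = Widget()
a.peer = b
b.peer = a                  # a reference cycle: refcounts never hit zero

probe = weakref.ref(a)      # watch one member of the cycle
del a, b                    # no names left; the cycle keeps both alive
gc.collect()                # the cycle detector breaks it anyway
assert probe() is None      # both objects really are gone
```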

Skip


From paulp@ActiveState.com  Sat Mar 10 02:19:41 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Fri, 09 Mar 2001 18:19:41 -0800
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk> <15015.17083.582010.93308@localhost.localdomain> <3AA98178.35B0257D@river-bank.demon.co.uk>
Message-ID: <3AA98F3D.E01AD657@ActiveState.com>

Phil Thompson wrote:
> 
>...
> 
> To be honest, I don't see why weak references have been implemented as a
> bolt-on module that only supports one particular object type. The thing
> I most like about the Python implementation is how consistent it is.
> Weak references should be implemented for every object type - even for
> None - you never know when it might come in useful.

Weak references add a pointer to each object. This could add up for
(e.g.) integers. The idea is that you only pay the cost of weak
references for objects that you would actually create weak references
to.

-- 
Python:
    Programming the way
    Guido
    indented it.



From phil@river-bank.demon.co.uk  Sat Mar 10 11:06:13 2001
From: phil@river-bank.demon.co.uk (Phil Thompson)
Date: Sat, 10 Mar 2001 11:06:13 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk> <15015.17083.582010.93308@localhost.localdomain> <3AA98178.35B0257D@river-bank.demon.co.uk> <3AA98F3D.E01AD657@ActiveState.com>
Message-ID: <3AAA0AA5.1E6983C2@river-bank.demon.co.uk>

Paul Prescod wrote:
> 
> Phil Thompson wrote:
> >
> >...
> >
> > To be honest, I don't see why weak references have been implemented as a
> > bolt-on module that only supports one particular object type. The thing
> > I most like about the Python implementation is how consistent it is.
> > Weak references should be implemented for every object type - even for
> > None - you never know when it might come in useful.
> 
> Weak references add a pointer to each object. This could add up for
> (e.g.) integers. The idea is that you only pay the cost of weak
> references for objects that you would actually create weak references
> to.

Yes I know, and I'm suggesting that people will always find extra uses
for things which the original designers hadn't thought of. Better to be
consistent (and allow weak references to anything) than try and
anticipate (wrongly) how people might want to use it in the future -
although I appreciate that the implementation cost might be too high.
Perhaps the question should be "what types make no sense with weak
references" and exclude them rather than "what types might be able to
use weak references" and include them.

Having said that, my only immediate requirement is to allow weak
references to functions, and I'd be happy if only that was implemented.

Phil


From phil@river-bank.demon.co.uk  Sat Mar 10 11:06:07 2001
From: phil@river-bank.demon.co.uk (Phil Thompson)
Date: Sat, 10 Mar 2001 11:06:07 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
 <15015.17083.582010.93308@localhost.localdomain>
 <3AA98178.35B0257D@river-bank.demon.co.uk> <15017.34832.44442.981293@beluga.mojam.com>
Message-ID: <3AAA0A9F.FBDE0719@river-bank.demon.co.uk>

Skip Montanaro wrote:
> 
>     Phil> This is a design decision - earlier versions did do this but it
>     Phil> almost always results in circular reference counts.
> 
> With cyclic GC couldn't you just let those circular reference counts occur
> and rely on the GC machinery to break the cycles?  Or do you have __del__
> methods?

Remember I'm ignorant when it comes to cyclic GC - PyQt is older and I
didn't pay much attention to it when it was introduced, so I may be
missing a trick. One thing though: if you have a dialog displayed, with
a circular reference to it, and you then del() the dialog instance - when
will the GC actually get around to resolving the circular reference and
removing the dialog from the screen? It must be guaranteed to do so
before the Qt event loop is re-entered.

Every PyQt class has a __del__ method (because I need to control the
order in which instance "variables" are deleted).

Phil


From guido@digicool.com  Sat Mar 10 20:08:25 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sat, 10 Mar 2001 15:08:25 -0500
Subject: [Python-Dev] Looking for a (paid) reviewer of Python code
Message-ID: <200103102008.PAA05543@cj20424-a.reston1.va.home.com>

I received the mail below; apparently Mr. Amon's problem is that he
needs someone to review a Python program that he ordered written
before he pays the programmer.  Mr. Amon will pay for the review and
has given me permission to forward his message here.  Please write
him at <lramon@earthlink.net>.

--Guido van Rossum (home page: http://www.python.org/~guido/)

------- Forwarded Message

Date:    Wed, 07 Mar 2001 10:58:04 -0500
From:    "Larry Amon" <lramon@earthlink.net>
To:      <guido@python.org>
Subject: Python programs

Hi Guido,

    My name is Larry Amon and I am the President/CEO of SurveyGenie.com. We
have had a relationship with a programmer at Harvard who has been using
Python as his programming language of choice. He tells us that he has this
valuable program that he has developed in Python. Our problem is that we
don't know anyone who knows Python that would be able to verify his claim.
We have funded this guy with our own hard earned money and now he is holding
his program hostage. He is willing to make a deal, but we need to know if
his program is worth anything.

    Do you have any suggestions? You can reach me at lramon@earthlink.net or
you can call me at 941 593 8250.


Regards
Larry Amon
CEO SurveyGenie.com

------- End of Forwarded Message



From pedroni@inf.ethz.ch  Sun Mar 11 02:11:34 2001
From: pedroni@inf.ethz.ch (Samuele Pedroni)
Date: Sun, 11 Mar 2001 03:11:34 +0100
Subject: [Python-Dev] nested scopes and global: some corner cases
Message-ID: <005c01c0a9d0$99ff21e0$ae5821c0@newmexico>

Hi.

Writing nested scopes support for jython (now it passes test_scope and
test_future <wink>),
I have come across these further corner cases for nested scopes mixed
with global decl. I have tried them with python 2.1b1, and I wonder if
the results are consistent with the proposed rule: a free variable is
bound according to the nearest outer scope binding (assign-like or
global decl); class scopes (for backw-comp) are ignored wrt this.

(I)
from __future__ import nested_scopes

x='top'
def ta():
 global x
 def tata():
  exec "x=1" in locals()
  return x # LOAD_NAME
 return tata

print ta()() prints 1; I believed it should print 'top' and that a
LOAD_GLOBAL should have been produced.
In this case the global binding is somehow ignored. Note: putting a
global decl in tata, xor removing the exec, makes tata deliver 'top' as
I expected (LOAD_GLOBALs are emitted).
Is this a bug, or am I missing something?

(II)
from __future__ import nested_scopes

x='top'
def ta():
    x='ta'
    class A:
        global x
        def tata(self):
            return x # LOAD_GLOBAL
    return A

print ta()().tata() # -> 'top'

Should not the global decl in class scope be ignored, so that x is
bound to the x in ta, resulting in 'ta' as output? If one substitutes
x='A' for the global x, that's what happens.
Or should only local bindings in class scope be ignored, but not global
decls?

regards, Samuele Pedroni



From tim.one@home.com  Sun Mar 11 05:16:38 2001
From: tim.one@home.com (Tim Peters)
Date: Sun, 11 Mar 2001 00:16:38 -0500
Subject: [Python-Dev] nested scopes and global: some corner cases
In-Reply-To: <005c01c0a9d0$99ff21e0$ae5821c0@newmexico>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>

[Samuele Pedroni]
> ...
> I have tried them with python 2.1b1 and I wonder if the results
> are consistent with the proposed rule:
> a free variable is bound according to the nearest outer scope binding
> (assign-like or global decl),
> class scopes (for backw-comp) are ignored wrt this.

"exec" and "import*" always complicate everything, though.

> (I)
> from __future__ import nested_scopes
>
> x='top'
> def ta():
>  global x
>  def tata():
>   exec "x=1" in locals()
>   return x # LOAD_NAME
>  return tata
>
> print ta()() prints 1, I believed it should print 'top' and a
> LOAD_GLOBAL should have been produced.

I doubt this will change.  In the presence of exec, the compiler has no idea
what's local anymore, so it deliberately generates LOAD_NAME.  When Guido
says he intends to "deprecate" exec-without-in, he should also always say
"and also deprecate exec in locals()/globals() too".  But he'll have to think
about that and get back to you <wink>.

Note that modifications to locals() already have undefined behavior
(according to the Ref Man), so exec-in-locals() is undefined too if the
exec'ed code tries to (re)bind any names.
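
For what it's worth, here is where the undefinedness bites in practice
under current CPython semantics (don't rely on anything stronger than
this):

```python
def f():
    x = 1
    exec("x = 2")     # rebinds a name in a *copy* of f's locals
    return x

assert f() == 1       # the function's own x is untouched
```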

> In this case the global binding is somehow ignored. Note: putting
> a global decl in tata xor removing the exec make tata deliver 'top' as
> I expected (LOAD_GLOBALs are emitted).
> Is this a bug or I'm missing something?

It's an accident either way (IMO), so it's a bug either way too -- or a
feature either way.  It's basically senseless!  What you're missing is the
layers of hackery in support of exec even before 2.1; this "give up on static
identification of locals entirely in the presence of exec" goes back many
years.
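
Without an exec muddying the waters, you can watch the compiler make the
choice with dis (modern nested-scope rules; the opnames are CPython's):

```python
import dis

x = 'top'

def ta():
    def tata():
        return x          # no binding for x in ta: falls through to globals
    return tata

assert ta()() == 'top'
assert 'LOAD_GLOBAL' in {i.opname for i in dis.get_instructions(ta())}

def tb():
    x = 'tb'
    def tbtb():
        return x          # x bound in the enclosing function: a closure
    return tbtb

assert tb()() == 'tb'
assert 'LOAD_DEREF' in {i.opname for i in dis.get_instructions(tb())}
```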

> (II)
> from __future__ import nested_scopes

> x='top'
> def ta():
>     x='ta'
>     class A:
>         global x
>         def tata(self):
>             return x # LOAD_GLOBAL
>     return A
>
> print ta()().tata() # -> 'top'
>
> should not the global decl in class scope be ignored and so x be
> bound to x in ta, resulting in 'ta' as output?

Yes, this one is clearly a bug.  Good catch!



From moshez@zadka.site.co.il  Sun Mar 11 15:19:44 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Sun, 11 Mar 2001 17:19:44 +0200 (IST)
Subject: [Python-Dev] Numeric PEPs
Message-ID: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>

Trying once again for the sought-after position of "most PEPs on the
planet", here are 3 new PEPs as discussed on the DevDay. These PEPs
in large part take apart the existing PEP-0228, which served
its strawman (or pie-in-the-sky) purpose well.

Note that according to PEP 0001, the discussion now should be focused
on whether these should be official PEPs, not whether these are to
be accepted. If we decide that these PEPs are good enough to be PEPs
Barry should check them in, fix the internal references between them.
I would also appreciate setting a non-Yahoo list (either SF or python.org)
to discuss those issues -- I'd rather discussion will be there rather
then in my mailbox -- I had bad experience regarding that with PEP-0228.

(See Barry? "send a draft" isn't that scary. Bet you don't like me to
tell other people about it, huh?)

PEP: XXX
Title: Unifying Long Integers and Integers
Version: $Revision$
Author: pep@zadka.site.co.il (Moshe Zadka)
Status: Draft
Python-Version: 2.2
Type: Standards Track
Created: 11-Mar-2001
Post-History:


Abstract

    Python has both integers, machine word size integral types, and long
    integers, unbounded integral types. When integer operations overflow
    the machine registers, they raise an error. This proposes to do away
    with the distinction, and unify the types from the perspective of both
    the Python interpreter and the C API.

Rationale

    Having the machine word size leak into the language hinders portability
    (for example, .pyc's are not portable because of that). Many programs
    find a need to deal with larger numbers after the fact, and changing the
    algorithms later is not only bothersome, but hinders performance in the
    normal case.

Literals

    A trailing 'L' at the end of an integer literal will stop having any
    meaning, and will eventually be phased out. This will be done using
    warnings when encountering such literals. The warning will be off by
    default in Python 2.2, on by default for two revisions, and then the
    trailing 'L' will no longer be supported.

Builtin Functions

    The function long will call the function int, issuing a warning. The
    warning will be off in 2.2, and on for two revisions before removing
    the function. A FAQ entry will explain that old modules needing this
    can put

         long = int

    at the top, or put

         import __builtin__
         __builtin__.long = int

    in site.py.

C API

    All PyLong_AsX functions will call PyInt_AsX. If PyInt_AsX does not
    exist, it will be added. Similarly for PyLong_FromX. A similar path of
    warnings as for the Python builtins will be followed.


Overflows

    When an arithmetic operation on two numbers whose internal representation
    is as machine-level integers returns something whose internal
    representation is a bignum, a warning which is turned off by default will
    be issued. This is only a debugging aid, and has no guaranteed semantics.
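    A sketch of the behaviour this section aims for, using today's unified
    semantics and assuming a 32-bit machine word:

```python
small = 2**31 - 1    # largest int that fits in a signed 32-bit machine word
big = small + 1      # under unification this silently widens; no OverflowError
assert big == 2**31
assert big > small
```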

Implementation

    The PyInt type's slot for a C long will be turned into a 

           union {
               long i;
               digit digits[1];
           };

    Only the n-1 lower bits of the long have any meaning; the top bit is always
    set. This distinguishes the two arms of the union. All PyInt functions will
    check this bit before deciding which type of operation to use.

Jython Issues

    Jython will have a PyInt interface which is implemented by both
    PyFixNum and PyBigNum.


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

==========================================
PEP: XXX
Title: Non-integer Division
Version: $Revision$
Author: pep@zadka.site.co.il (Moshe Zadka)
Status: Draft
Python-Version: 2.2
Type: Standards Track
Created: 11-Mar-2001
Post-History:


Abstract

    Dividing integers returns the floor of the quotient. This behaviour
    is known as integer division, and is similar to what C and FORTRAN do.
    This has the useful property that all operations on integers return
    integers, but it does tend to put a hump in the learning curve when
    new programmers are surprised that

                  1/2 == 0

    This proposal shows a way to change this while keeping backward
    compatibility issues in mind.

Rationale

    The behaviour of integer division is a major stumbling block found in
    user testing of Python. This manages to trip up new programmers
    regularly and even causes the experienced programmer to make the
    occasional bug. The workarounds, like explicitly coercing one of the
    operands to float or using a non-integer literal, are very non-intuitive
    and lower the readability of the program.

// Operator

    A '//' operator will be introduced, which will call the nb_intdivide
    or __intdiv__ slots. This operator will be implemented in all the Python
    numeric types, and will have the semantics of

                 a // b == floor(a/b)

    Except that the type of a//b will be the type that a and b are coerced
    into (specifically, if a and b are of the same type, a//b will be of that
    type too).
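    The proposed semantics, checked against today's Python (where '//' was
    indeed eventually adopted):

```python
assert 7 // 2 == 3        # result keeps the operands' common type
assert -7 // 2 == -4      # floor, not truncation toward zero
assert 7.0 // 2 == 3.0    # float operands floor but stay float
assert 7 // 2.0 == 3.0    # mixed operands coerce to float first
```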

Changing the Semantics of the / Operator

    The nb_divide slot on integers (and long integers, if these are a separate
    type) will issue a warning when given integers a and b such that

                  a % b != 0

    The warning will be off by default in the 2.2 release, and on by default
    in the next Python release, and will stay in effect for 24 months.
    The first Python release after those 24 months will implement

                  (a/b) * b = a (more or less)

    The type of a/b will be either a float or a rational, depending on other
    PEPs.

__future__

    A special opcode, FUTURE_DIV will be added that does the equivalent
    of

        if type(a) in (types.IntType, types.LongType):
             if type(b) in (types.IntType, types.LongType):
                 if a % b != 0:
                      return float(a)/b
        return a/b

    (or rational(a)/b, depending on whether 0.5 is rational or float)

    If "from __future__ import non_integer_division" is present, then in
    the releases until the IntType nb_divide is changed, the "/" operator
    will be compiled to FUTURE_DIV.

Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

====================================
PEP: XXX
Title: Adding a Rational Type to Python
Version: $Revision$
Author: pep@zadka.site.co.il (Moshe Zadka)
Status: Draft
Python-Version: 2.2
Type: Standards Track
Created: 11-Mar-2001
Post-History:


Abstract

    Python has no number type whose semantics are those of an unboundedly
    precise rational number. This proposal explains the semantics of such
    a type, and suggests builtin functions and literals to support such
    a type. In addition, if division of integers were to return a non-integer,
    it could also return a rational type.

Rationale

    While sometimes slower and more memory intensive (in general, unboundedly
    so), rational arithmetic captures more closely the mathematical ideal of
    numbers, and tends to have behaviour which is less surprising to newbies.

RationalType

    This will be a numeric type. The unary operators will do the obvious thing.
    Binary operators will coerce integers and long integers to rationals, and
    rationals to floats and complexes.

    The following attributes will be supported: .numerator, .denominator.
    The language definition will not define anything other than that

           r.denominator * r == r.numerator

    In particular, no guarantees are made regarding the GCD or the sign of
    the denominator, even though in the proposed implementation, the GCD is
    always 1 and the denominator is always positive.

    The method r.trim(max_denominator) will return the closest rational s to
    r such that abs(s.denominator) <= max_denominator.
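    The proposed type never landed as such, but the later standard-library
    fractions module captures most of these semantics; a sketch, noting
    that limit_denominator plays roughly the role of the proposed trim():

```python
from fractions import Fraction

r = Fraction(6, 8)
assert r.numerator == 3 and r.denominator == 4   # stored with GCD 1, positive denominator
assert r.denominator * r == r.numerator          # the one guaranteed identity
assert Fraction(1, 3).limit_denominator(2) == Fraction(1, 2)  # closest with denominator <= 2
```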

The rational() Builtin

    This function will have the signature rational(n, d=1). n and d must both
    be integers, long integers or rationals. A guarantee is made that

            rational(n, d) * d == n

Literals

    Literals conforming to the RE '\d*\.\d*' will be rational numbers.

Backwards Compatibility

    The only backwards compatibility issue is the type of literals mentioned
    above. The following migration is suggested:

    1. from __future__ import rational_literals will cause all such literals
       to be treated as rational numbers.
    2. Python 2.2 will have a warning, turned off by default, about such 
       literals in the absence of such a __future__ statement. The warning
       message will contain information about the __future__ statement, and
       note that to get floating point literals, they should be suffixed
       with "e0".
    3. Python 2.3 will have the warning turned on by default. This warning will
       stay in place for 24 months, at which time the literals will be rationals
       and the warning will be removed.


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From pedroni@inf.ethz.ch  Sun Mar 11 16:17:38 2001
From: pedroni@inf.ethz.ch (Samuele Pedroni)
Date: Sun, 11 Mar 2001 17:17:38 +0100
Subject: [Python-Dev] nested scopes and global: some corner cases
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>
Message-ID: <001b01c0aa46$d3dbbd80$f979fea9@newmexico>

Hi.

[Tim Peters on
from __future__ import nested_scopes

x='top'
def ta():
  global x
  def tata():
   exec "x=1" in locals()
   return x # LOAD_NAME vs LOAD_GLOBAL?
  return tata

 print ta()() # 1 vs. 'top' ?
]
-- snip --
> It's an accident either way (IMO), so it's a bug either way too -- or a
> feature either way.  It's basically senseless!  What you're missing is the
> layers of hackery in support of exec even before 2.1; this "give up on static
> identification of locals entirely in the presence of exec" goes back many
> years.
(Just a joke) I'm not such a "newbie" that the guess that I'm missing
something is right with probability > .5. At least I hope so.
The same hackery is there in the jython codebase
and I have taken much care in preserving it <wink>.

The point is simply that 'exec in locals()' is like a bare exec,
but it has been decided to allow 'exec in' even in the presence
of nested scopes, and we cannot detect the 'locals()' special case
(at compile time) because in Python 'locals' is the builtin only with
high probability.

So we face the problem: how do we *implement* an undefined behaviour
(the ref says that changing locals is undefined, as everybody knows)
that historically has never been to seg fault, in the new (nested scopes)
context? It is also true that what we are doing is "impossible"; that's why
it has been decided to raise a SyntaxError in the bare exec case <wink>.

To be honest, I have just implemented things in jython my/some way, and then
discovered that the jython CVS version and Python 2.1b1 (here) behave
differently. A posteriori I just tried to solve/explain things using
the old problem pattern: I give you a (number) sequence, guess the next
term:

the sequence is: (over this jython and python agree)

from __future__ import nested_scopes

def a():
 exec "x=1" in locals()
 return x # LOAD_NAME (jython does the equivalent)

def b():
  global x
  exec "x=1" in locals()
  return x # LOAD_GLOBAL

def c():
 global x
 def cc(): return x # LOAD_GLOBAL
 return cc

def d():
 x='d'
 def dd():
   exec "x=1" in locals() # without 'in locals()' => SynError
   return x # LOAD_DEREF (x in d)
 return dd

and then the term to guess:

def z():
 global x
 def zz():
  exec "x=1" in locals() # without 'in locals()' => SynError
  return x # ???? python guesses LOAD_NAME, jython the equiv of LOAD_GLOBAL
 return zz

Should python and jython agree here too? Does anybody want to spend some
time convincing me that I should change jython's meaning of undefined?
I will not spend more time doing the converse <wink>.

regards, Samuele Pedroni.

PS: It is also possible that in trying to solve the pdb+nested scopes problem
we will have to consider the grab-the-locals problem with more care.



From paulp@ActiveState.com  Sun Mar 11 19:15:11 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Sun, 11 Mar 2001 11:15:11 -0800
Subject: [Python-Dev] mail.python.org down?
Message-ID: <3AABCEBF.1FEC1F9D@ActiveState.com>

>>> urllib.urlopen("http://mail.python.org")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "c:\python20\lib\urllib.py", line 61, in urlopen
    return _urlopener.open(url)
  File "c:\python20\lib\urllib.py", line 166, in open
    return getattr(self, name)(url)
  File "c:\python20\lib\urllib.py", line 273, in open_http
    h.putrequest('GET', selector)
  File "c:\python20\lib\httplib.py", line 425, in putrequest
    self.send(str)
  File "c:\python20\lib\httplib.py", line 367, in send
    self.connect()
  File "c:\python20\lib\httplib.py", line 351, in connect
    self.sock.connect((self.host, self.port))
  File "<string>", line 1, in connect
IOError: [Errno socket error] (10061, 'Connection refused')

-- 
Python:
    Programming the way
    Guido
    indented it.


From tim.one@home.com  Sun Mar 11 19:14:28 2001
From: tim.one@home.com (Tim Peters)
Date: Sun, 11 Mar 2001 14:14:28 -0500
Subject: [Python-Dev] Forbidden names & obmalloc.c
Message-ID: <LNBBLJKPBEHFEDALKOLCOEOCJEAA.tim.one@home.com>

In std C, all identifiers that begin with an underscore and are followed by
an underscore or uppercase letter are reserved for the platform C
implementation.  obmalloc.c violates this rule all over the place, spilling
over into objimpl.h's use of _PyCore_ObjectMalloc, _PyCore_ObjectRealloc, and
_PyCore_ObjectFree.  The leading "_Py" there *probably* leaves them safe
despite being forbidden, but things like obmalloc.c's _SYSTEM_MALLOC and
_SET_HOOKS are going to bite us sooner or later (hard to say, but they may
have already, in bug #407680).

I renamed a few of the offending vrbl names, but I don't understand the
intent of the multiple layers of macros in this subsystem.  If anyone else
believes they do, please rename these suckers before the bad names get out
into the world and we have to break user code to repair eventual conflicts
with platforms' uses of these (reserved!) names.



From guido@digicool.com  Sun Mar 11 21:37:14 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 16:37:14 -0500
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: Your message of "Sun, 11 Mar 2001 00:16:38 EST."
 <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>
Message-ID: <200103112137.QAA13084@cj20424-a.reston1.va.home.com>

> When Guido
> says he intends to "deprecate" exec-without-in, he should also always say
> "and also deprecate exec in locals()/global() too".  But he'll have to think
> about that and get back to you <wink>.

Actually, I intend to deprecate locals().  For now, globals() are
fine.  I also intend to deprecate vars(), at least in the form that is
equivalent to locals().

> Note that modifications to locals() already have undefined behavior
> (according to the Ref Man), so exec-in-locals() is undefined too if the
> exec'ed code tries to (re)bind any names.

And that's the basis for deprecating it.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Sun Mar 11 22:28:29 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 17:28:29 -0500
Subject: [Python-Dev] mail.python.org down?
In-Reply-To: Your message of "Sun, 11 Mar 2001 11:15:11 PST."
 <3AABCEBF.1FEC1F9D@ActiveState.com>
References: <3AABCEBF.1FEC1F9D@ActiveState.com>
Message-ID: <200103112228.RAA13919@cj20424-a.reston1.va.home.com>

> >>> urllib.urlopen("http://mail.python.org")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "c:\python20\lib\urllib.py", line 61, in urlopen
>     return _urlopener.open(url)
>   File "c:\python20\lib\urllib.py", line 166, in open
>     return getattr(self, name)(url)
>   File "c:\python20\lib\urllib.py", line 273, in open_http
>     h.putrequest('GET', selector)
>   File "c:\python20\lib\httplib.py", line 425, in putrequest
>     self.send(str)
>   File "c:\python20\lib\httplib.py", line 367, in send
>     self.connect()
>   File "c:\python20\lib\httplib.py", line 351, in connect
>     self.sock.connect((self.host, self.port))
>   File "<string>", line 1, in connect
> IOError: [Errno socket error] (10061, 'Connection refused')

Beats me.  Indeed it is down.  I've notified the folks at DC
responsible for the site.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From paulp@ActiveState.com  Sun Mar 11 23:15:38 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Sun, 11 Mar 2001 15:15:38 -0800
Subject: [Python-Dev] mail.python.org down?
References: <3AABCEBF.1FEC1F9D@ActiveState.com> <200103112228.RAA13919@cj20424-a.reston1.va.home.com>
Message-ID: <3AAC071A.799A8B50@ActiveState.com>

Guido van Rossum wrote:
> 
>...
> 
> Beats me.  Indeed it is down.  I've notified the folks at DC
> responsible for the site.

It is fixed now. Thanks!

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)


From paulp@ActiveState.com  Sun Mar 11 23:23:07 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Sun, 11 Mar 2001 15:23:07 -0800
Subject: [Python-Dev] Revive the types sig?
Message-ID: <3AAC08DB.9D4E96B4@ActiveState.com>

I have been involved with the types-sig for a long time and it has
consumed countless hours out of the lives of many brilliant people. I
strongly believe that it will only ever work if we change some of our
fundamental assumptions, goals and procedures. At next year's
conference, I do not want to be at the same place in the discussion that
we were this year, and last year, and the year before. The last time I
thought I could make progress through sheer effort. All that did was
burn me out and stress out my wife. We've got to work smarter, not
harder.

The first thing we need to adjust is our terminology and goals. I think
that we should design a *parameter type annotation* system that will
lead directly to better error checking *at runtime*, better
documentation, better development environments and so forth. Checking
types *at compile time* should be considered a tools issue that can be
solved by separate tools. I'm not going to say that Python will NEVER
have a static type checking system but I would say that that shouldn't
be a primary goal.

I've reversed my opinion on this issue. Hey, even Guido makes mistakes.

I think that if the types-sig is going to come up with something
useful this time, we must observe a few principles that have proven
useful in developing Python:

1. Incremental development is okay. You do not need the end-goal in
mind before you begin work. Python today is very different from what it
was when it was first developed (not as radically different as some
languages, but still different).

2. It is not necessary to get everything right. Python has some warts.
Some are easier to remove than others but they can all be removed
eventually. We have to get a type system done, test it out, and then
maybe we have to remove the warts. We may not design a perfect gem from
the start. Perfection is a goal, not a requirement.

3. Whatever feature you believe is absolutely necessary to a decent
type system probably is not. There are no right or wrong answers,
only features that work better or worse than other features.

It is important to understand that a dynamically-checked type
annotation system is just a replacement for assertions. Anything that
cannot be expressed in the type system CAN be expressed through
assertions.

For instance one person might claim that the type system needs to
differentiate between 32 bit integers and 64 bit integers. But if we
do not allow that differentiation directly in the type system, they
could do that in assertions. C'est la vie.
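A sketch of the kind of assertion-based check meant here (the 32-bit
constraint mirrors the hypothetical example above; store_word is an
illustrative name, not a real API):

```python
def store_word(n):
    # A type annotation system need not express "fits in 32 bits";
    # plain assertions cover the extra constraint at runtime.
    assert isinstance(n, int), "n must be an integer"
    assert -2**31 <= n < 2**31, "n must fit in a signed 32-bit word"
    return n

store_word(100)       # passes silently; store_word(2**40) raises AssertionError
```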

This is not unique to Python.  Languages like C++ and Java also have
type test and type assertion operators to "work around" the
limitations of their type systems. If people who have spent their
entire lives inventing static type checking systems cannot come up
with systems that are 100% "complete" then we in the Python world
should not even try. There is nothing wrong with using assertions for
advanced type checks. 

For instance, if you try to come up with a type system that can define
the type of "map" you will probably come up with something so
complicated that it will never be agreed upon or implemented.
(Python's map is much harder to type-declare than that of functional
languages because the function passed in must handle exactly as many
arguments as the unbounded number of sequences that are passed as
arguments to map.)

Even if we took an extreme position and ONLY allowed type annotations
for basic types like strings, numbers and sequences, Python would 
still be a better language. There are thousands of instances of these 
types in the standard library. If we can improve the error checking 
and documentation of these methods we have improved on the status 
quo. Adding type annotations for the other parameters could wait 
for another day.

----

In particular there are three features that have always exploded into
unending debates in the past. I claim that they should temporarily be
set aside while we work out the basics.

 A) Parameterized types (or templates): 

Parameterized types always cause the discussion to spin out of control
as we discuss levels and types of
parameterizability. A type system can be very useful with
parameterization. For instance, Python itself is written in C. C has no
parameterizability. Yet C is obviously still very useful (and simple!).
Java also does not yet have parameterized types and yet it is the most
rapidly growing statically typed programming language!

It is also important to note that parameterized types are much, much
more important in a language that "claims" to catch most or all type
errors at compile time. Python will probably never make that claim.
If you want to do a more sophisticated type check than Python allows,
you should do that in an assertion:

assert Btree.containsType(String)

Once the basic type system is in place, we can discuss the importance
of parameterized types separately later. Once we have attempted to use
Python without them, we will understand our needs better. The type
system should not prohibit the addition of parameterized types in the
future. 

A person could make a strong argument for allowing parameterization
only of basic types ("list of string", "tuple of integers") but I
think that we could even postpone this for the future.

 B) Static type checking: 

Static type warnings are important and we want to enable the development
of tools that will detect type errors before applications are shipped.
Nevertheless, we should not attempt to define a static type checking
system for Python at this point. That may happen in the future or never.

Unlike Java or C++, we should not require the Python interpreter
itself to ever reject code that "might be" type incorrect. Other tools
such as linters and IDEs should handle these forms of whole-program
type-checks.  Rather than defining the behavior of these tools in
advance, we should leave that as a quality of implementation issue for
now.

We might decide to add formally-defined static type checking to
Python in the future. Dynamically checked annotations give us a
starting point. Once again, I think that the type system should be
defined so that annotations could be used as part of a static type
checking system in the future, should we decide that we want one.

 C) Attribute-value and variable declarations: 

In traditional static type checking systems, it is very important to
declare the type for attributes in a class and variables in a function. 

This feature is useful but it is fairly separable. I believe it should
wait because it brings up a bunch of issues such as read-only
attributes, cross-boundary assignment checks and so forth.

I propose that the first go-round of the types-sig should ONLY address
the issue of function signatures.

Let's discuss my proposal in the types-sig. Executive summary:

 * incremental development policy
 * syntax for parameter type declarations
 * syntax for return type declarations
 * optional runtime type checking
 * goals are better runtime error reporting and method documentation

Deferred for future versions (or never):

 * compile-time type checking
 * parameterized types
 * declarations for variables and attributes

http://www.python.org/sigs/types-sig/
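One way to picture the runtime-checked parameter declarations being
proposed, sketched with today's decorator and annotation syntax (neither
existed when this was written; the `checked` helper is illustrative only,
not part of any proposal):

```python
import functools
import inspect

def checked(func):
    # Enforce annotated parameter types when the function is called.
    sig = inspect.signature(func)
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = func.__annotations__.get(name)
            if expected is not None and not isinstance(value, expected):
                raise TypeError("%s must be %s, got %s"
                                % (name, expected.__name__, type(value).__name__))
        return func(*args, **kwargs)
    return wrapper

@checked
def greet(name: str, times: int) -> str:
    return ", ".join([name] * times)
```

With this, greet("hi", 2) returns "hi, hi", while greet("hi", "2")
raises a TypeError at call time -- exactly the better runtime error
reporting the executive summary asks for.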

-- 
Python:
    Programming the way
    Guido
    indented it.


From guido@digicool.com  Sun Mar 11 23:25:13 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 18:25:13 -0500
Subject: [Python-Dev] Unifying Long Integers and Integers
In-Reply-To: Your message of "Sun, 11 Mar 2001 17:19:44 +0200."
 <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>

(I'm splitting this in separate replies per PEP, to focus the
discussion a bit.)

> Trying once again for the sought after position of "most PEPs on the
> planet", here are 3 new PEPs as discussed on the DevDay. These PEPs
> are in a large way, taking apart the existing PEP-0228, which served
> its strawman (or pie-in-the-sky) purpose well.
> 
> Note that according to PEP 0001, the discussion now should be focused
> on whether these should be official PEPs, not whether these are to
> be accepted. If we decide that these PEPs are good enough to be PEPs
> Barry should check them in, fix the internal references between them.

Actually, since you have SF checkin permissions, Barry can just give
you a PEP number and you can check it in yourself!

> I would also appreciate setting a non-Yahoo list (either SF or
> python.org) to discuss those issues -- I'd rather discussion will be
> there rather then in my mailbox -- I had bad experience regarding
> that with PEP-0228.

Please help yourself.  I recommend using SF since it requires less
overhead for the poor python.org sysadmins.

> (See Barry? "send a draft" isn't that scary. Bet you don't like me
> to tell other people about it, huh?)

What was that about?

> PEP: XXX
> Title: Unifying Long Integers and Integers
> Version: $Revision$
> Author: pep@zadka.site.co.il (Moshe Zadka)
> Status: Draft
> Python-Version: 2.2
> Type: Standards Track
> Created: 11-Mar-2001
> Post-History:
> 
> 
> Abstract
> 
>     Python has both integers, machine word size integral types, and
>     long integers, unbounded integral types. When integer
>     operations overflow the machine registers, they raise an
>     error. This proposes to do away with the distinction, and unify
>     the types from the perspective of both the Python interpreter,
>     and the C API.
> 
> Rationale
> 
>     Having the machine word size leak to the language hinders
>     portability (for example, .pyc's are not portable because of
>     that). Many programs find a need to deal with larger numbers
>     after the fact, and changing the algorithms later is not only
>     bothersome, but hinders performance on the normal case.

I'm not sure if the portability of .pyc's is much worse than that of
.py files.  As long as you don't use plain ints >= 2**31 both are 100%
portable.  *programs* can of course become non-portable, but the true
reason for the change is simply that the distinction is arbitrary and
irrelevant.

> Literals
> 
>     A trailing 'L' at the end of an integer literal will stop having
>     any meaning, and will be eventually phased out. This will be
>     done using warnings when encountering such literals. The warning
>     will be off by default in Python 2.2, on by default for two
>     revisions, and then will no longer be supported.

Please suggest a more explicit schedule for introduction, with
approximate dates.  You can assume there will be roughly one 2.x
release every 6 months.

> Builtin Functions
> 
>     The function long will call the function int, issuing a
>     warning. The warning will be off in 2.2, and on for two
>     revisions before removing the function. A FAQ will be added that
>     if there are old modules needing this then
> 
>          long=int
> 
>     At the top would solve this, or
> 
>          import __builtin__
>          __builtin__.long=int
> 
>     In site.py.

There's more to it than that.  What about sys.maxint?  What should it
be set to?  We've got to pick *some* value because there's old code
that uses it.  (An additional problem here is that it's not easy to
issue warnings for using a particular constant.)

Other areas where we need to decide what to do: there are a few
operations that treat plain ints as unsigned: hex() and oct(), and the
format operators "%u", "%o" and "%x".  These have different semantics
for bignums!  (There they ignore the request for unsignedness and
return a signed representation anyway.)

There may be more -- the PEP should strive to eventually list all
issues, although of course it needn't be complete at first checkin.

> C API
> 
>     All PyLong_AsX will call PyInt_AsX. If PyInt_AsX does not exist,
>     it will be added. Similarly for PyLong_FromX. A similar path of
>     warnings as for the Python builtins will be followed.

Many C APIs for other datatypes currently take int or long arguments,
e.g. list indexing and slicing.  I suppose these could stay the same,
or should we provide ways to use longer integers from C as well?

Also, what will you do about PyInt_AS_LONG()?  If PyInt_Check()
returns true for bignums, C code that uses PyInt_Check() and then
assumes that PyInt_AS_LONG() will return a valid outcome is in for a
big surprise!  I'm afraid that we will need to think through the
compatibility strategy for C code more.

> Overflows
> 
>     When an arithmetic operation on two numbers whose internal
>     representation is as machine-level integers returns something
>     whose internal representation is a bignum, a warning which is
>     turned off by default will be issued. This is only a debugging
>     aid, and has no guaranteed semantics.

Note that the implementation suggested below implies that the overflow
boundary is at a different value than currently -- you take one bit
away from the long.  For backwards compatibility I think that may be
bad...

> Implementation
> 
>     The PyInt type's slot for a C long will be turned into a 
> 
>            union {
>                long i;
>                digit digits[1];
>            };

Almost.  The current bignum implementation actually has a length field
first.

I have an alternative implementation in mind where the type field is
actually different for machine ints and bignums.  Then the existing
int representation can stay, and we lose no bits.  This may have other
implications though, since uses of type(x) == type(1) will be broken.
Once the type/class unification is complete, this could be solved by
making long a subtype of int.

>     Only the n-1 lower bits of the long have any meaning, the top
>     bit is always set. This distinguishes the union. All PyInt
>     functions will check this bit before deciding which types of
>     operations to use.

See above. :-(

> Jython Issues
> 
>     Jython will have a PyInt interface which is implemented by both
>     PyFixNum and PyBigNum.
> 
> 
> Copyright
> 
>     This document has been placed in the public domain.
> 
> 
> 
> Local Variables:
> mode: indented-text
> indent-tabs-mode: nil
> End:

All in all, a good start, but needs some work, Moshe!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Sun Mar 11 23:37:37 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 18:37:37 -0500
Subject: [Python-Dev] Non-integer Division
In-Reply-To: Your message of "Sun, 11 Mar 2001 17:19:44 +0200."
 <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>

Good start, Moshe!  Some comments below.

> PEP: XXX
> Title: Non-integer Division
> Version: $Revision$
> Author: pep@zadka.site.co.il (Moshe Zadka)
> Status: Draft
> Python-Version: 2.2
> Type: Standards Track
> Created: 11-Mar-2001
> Post-History:
> 
> 
> Abstract
> 
>     Dividing integers returns the floor of the quotient. This
>     behaviour is known as integer division, and is similar to what C
>     and FORTRAN do.  This has the useful property that all
>     operations on integers return integers, but it does tend to put
>     a hump in the learning curve when new programmers are surprised
>     that
> 
>                   1/2 == 0
> 
>     This proposal shows a way to change this while keeping backward
>     compatibility issues in mind.
> 
> Rationale
> 
>     The behaviour of integer division is a major stumbling block
>     found in user testing of Python. This manages to trip up new
>     programmers regularly and even causes experienced
>     programmers to make the occasional bug. The workarounds, like
>     explicitly coercing one of the operands to float or using a
>     non-integer literal, are very non-intuitive and lower the
>     readability of the program.

There is a specific kind of example that shows why this is bad.
Python's polymorphism and treatment of mixed-mode arithmetic
(e.g. int+float => float) suggests that functions taking float
arguments and doing some math on them should also be callable with int
arguments.  But sometimes that doesn't work.  For example, in
electronics, Ohm's law suggests that current (I) equals voltage (U)
divided by resistance (R).  So here's a function to calculate the
current:

    >>> def I(U, R):
    ...     return U/R
    ...
    >>> print I(110, 100) # Current through a 100 Ohm resistor at 110 Volt
    1
    >>> 

This answer is wrong! It should be 1.1.  While there's a work-around
(return 1.0*U/R), it's ugly, and moreover because no exception is
raised, simple code testing may not reveal the bug.  I've seen this
reported many times.
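Under the change this PEP proposes (and which Python 3.0 eventually adopted, with / as true division and // as floor division), the same function gives the expected answer. A minimal sketch:

```python
def I(U, R):
    # With true division, int/int no longer floors silently.
    return U / R

# Current through a 100 Ohm resistor at 110 Volt:
assert I(110, 100) == 1.1   # the physically correct answer
assert 110 // 100 == 1      # the old behaviour, now spelled //
```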

> // Operator

Note: we could wind up using a different way to spell this operator,
e.g. Pascal uses 'div'.  The disadvantage of 'div' is that it
introduces a new reserved word, with all the issues it creates.  The
disadvantage of '//' is that it means something very different to Java
and C++ users.

>     A '//' operator will be introduced, which will call the
>     nb_intdivide or __intdiv__ slots. This operator will be
>     implemented in all the Python numeric types, and will have the
>     semantics of
> 
>                  a // b == floor(a/b)
> 
>     Except that the type of a//b will be the type a and b will be
>     coerced into (specifically, if a and b are of the same type,
>     a//b will be of that type too).
> 
> Changing the Semantics of the / Operator
> 
>     The nb_divide slot on integers (and long integers, if these are
>     a separate type) will issue a warning when given integers a and
>     b such that
> 
>                   a % b != 0
> 
>     The warning will be off by default in the 2.2 release, and on by
>     default in the next Python release, and will stay in effect
>     for 24 months.  The first Python release after those 24 months
>     will implement
> 
>                   (a/b) * b = a (more or less)
> 
>     The type of a/b will be either a float or a rational, depending
>     on other PEPs.
> 
> __future__
> 
>     A special opcode, FUTURE_DIV will be added that does the equivalent

Maybe for compatibility of bytecode files we should come up with a
better name, e.g. FLOAT_DIV?

>     of
> 
>         if type(a) in (types.IntType, types.LongType):
>              if type(b) in (types.IntType, types.LongType):
>                  if a % b != 0:
>                       return float(a)/b
>         return a/b
> 
>     (or rational(a)/b, depending on whether 0.5 is rational or float)
> 
>     If "from __future__ import non_integer_division" is present, then
>     in the releases until the IntType nb_divide is changed, the "/"
>     operator is compiled to FUTURE_DIV

I find "non_integer_division" rather long.  Maybe it should be called
"float_division"?
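The quoted pseudocode can be written out as a plain function. This sketch reproduces the intended semantics on a modern interpreter (where / is already true division, so the exact-division case is rendered with // to mimic the old integral result):

```python
def future_div(a, b):
    # Pure-Python equivalent of the proposed FUTURE_DIV opcode.
    if isinstance(a, int) and isinstance(b, int):
        if a % b != 0:
            return float(a) / b   # inexact: promote to float
        return a // b             # exact: old "/" returned an int here
    return a / b                  # non-integers: ordinary division

assert future_div(1, 2) == 0.5
assert future_div(4, 2) == 2 and isinstance(future_div(4, 2), int)
```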

> Copyright
> 
>     This document has been placed in the public domain.
> 
> 
> 
> Local Variables:
> mode: indented-text
> indent-tabs-mode: nil
> End:

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Sun Mar 11 23:55:03 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 18:55:03 -0500
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Your message of "Sun, 11 Mar 2001 17:19:44 +0200."
 <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>

Here's the third installment -- my response to Moshe's rational
numbers PEP.

I believe that a fourth PEP should be written as well: decimal
floating point.  Maybe Tim can draft this?

> PEP: XXX
> Title: Adding a Rational Type to Python
> Version: $Revision$
> Author: pep@zadka.site.co.il (Moshe Zadka)
> Status: Draft
> Python-Version: 2.2
> Type: Standards Track
> Created: 11-Mar-2001
> Post-History:
> 
> 
> Abstract
> 
>     Python has no number type whose semantics are that of an
>     unboundedly precise rational number.

But one could easily be added to the standard library, and several
implementations exist, including one in the standard distribution:
Demo/classes/Rat.py.

>     This proposal explains the
>     semantics of such a type, and suggests builtin functions and
>     literals to support such a type. In addition, if division of
>     integers would return a non-integer, it could also return a
>     rational type.

It's kind of sneaky not to mention in the abstract that this should be
the default representation for numbers containing a decimal point,
replacing most use of floats!

> Rationale
> 
>     While sometimes slower and more memory intensive (in general,
>     unboundedly so), rational arithmetic captures more closely the
>     mathematical ideal of numbers, and tends to have behaviour which
>     is less surprising to newbies,

This PEP definitely needs a section of arguments Pro and Con.  For
Con, mention at least that rational arithmetic is much slower than
floating point, and can become *very* much slower when algorithms
aren't coded carefully.  Now, naively coded algorithms often don't
work well with floats either, but there is a lot of cultural knowledge
about defensive programming with floats, which is easily accessible to
newbies -- similar information about coding with rationals is much
less easily accessible, because no mainstream languages have used
rationals before.  (I suppose Common Lisp has rationals, since it has
everything, but I doubt that it uses them by default for numbers with
a decimal point.)
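For what it's worth, today's standard library ships essentially this type as fractions.Fraction, whose limit_denominator plays the role the PEP's proposed .trim method describes (the mapping between the two names is my own reading):

```python
import math
from fractions import Fraction

r = Fraction(1, 3) + Fraction(1, 6)
assert r == Fraction(1, 2)            # exact, unlike 1/3 + 1/6 in floats
assert r.numerator == 1 and r.denominator == 2

# The proposed r.trim(max_denominator) corresponds to limit_denominator:
assert Fraction(math.pi).limit_denominator(10) == Fraction(22, 7)
```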

> RationalType
> 
>     This will be a numeric type. The unary operators will do the
>     obvious thing.  Binary operators will coerce integers and long
>     integers to rationals, and rationals to floats and complexes.
>
>     The following attributes will be supported: .numerator,
>     .denominator.  The language definition will not define anything
>     other than that
> 
>            r.denominator * r == r.numerator
> 
>     In particular, no guarantees are made regarding the GCD or the
>     sign of the denominator, even though in the proposed
>     implementation, the GCD is always 1 and the denominator is
>     always positive.
>
>     The method r.trim(max_denominator) will return the closest
>     rational s to r such that abs(s.denominator) <= max_denominator.
> 
> The rational() Builtin
> 
>     This function will have the signature rational(n, d=1). n and d
>     must both be integers, long integers or rationals. A guarantee
>     is made that
> 
>             rational(n, d) * d == n
> 
> Literals
> 
>     Literals conforming to the RE '\d*\.\d*' will be rational numbers.
> 
> Backwards Compatibility
> 
>     The only backwards compatibility issue is the type of literals
>     mentioned above. The following migration is suggested:
> 
>     1. from __future__ import rational_literals will cause all such
>        literals to be treated as rational numbers.
>     2. Python 2.2 will have a warning, turned off by default, about
>        such literals in the absence of such a __future__ statement.
>        The warning message will contain information about the
>        __future__ statement, and explain that to get floating point
>        literals, they should be suffixed with "e0".
>     3. Python 2.3 will have the warning turned on by default. This
>        warning will stay in place for 24 months, at which time the
>        literals will be rationals and the warning will be removed.

There are also backwards compatibility issues at the C level.

Question: the time module's time() function currently returns a
float.  Should it return a rational instead?  This is a trick question.

> Copyright
> 
>     This document has been placed in the public domain.
> 
> 
> 
> Local Variables:
> mode: indented-text
> indent-tabs-mode: nil
> End:

--Guido van Rossum (home page: http://www.python.org/~guido/)


From moshez@zadka.site.co.il  Mon Mar 12 00:25:23 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 02:25:23 +0200 (IST)
Subject: [Python-Dev] Re: Unifying Long Integers and Integers
In-Reply-To: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>
References: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <20010312002523.94F68AA3A@darjeeling.zadka.site.co.il>

On Sun, 11 Mar 2001, Guido van Rossum <guido@digicool.com> wrote:

> Actually, since you have SF checkin permissions, Barry can just give
> you a PEP number and you can check it in yourself!

Technically yes. I'd rather Barry changed PEP-0000 himself ---
if he's ready to do that and let me check in the PEPs it's fine, but
I just imagined he'd like to keep the state consistent.

[re: numerical PEPs mailing list] 
> Please help yourself.  I recommend using SF since it requires less
> overhead for the poor python.org sysadmins.

Err...I can't. Requesting an SF mailing list is an admin operation.

[re: portablity of literals]
> I'm not sure if the portability of .pyc's is much worse than that of
> .py files.

Of course, .py's and .pyc's are just as portable. I do think that this
helps programs be more portable when they have literals inside them,
especially since (I believe) the world will soon be a mixture of
32 bit and 64 bit machines.

> There's more to it than that.  What about sys.maxint?  What should it
> be set to?

I think I'd like to stuff this one into "open issues" and ask people to
grep through code searching for sys.maxint before I decide.

Grepping through the standard library shows that this is most often
used as a maximum size for sequences. So I think it should probably be
the maximum size of an integer type large enough to hold a pointer.
(The only exception is mhlib.py, which uses it when int(string) raises
an OverflowError -- which would stop happening, so that code would be
unreachable.)

> Other areas where we need to decide what to do: there are a few
> operations that treat plain ints as unsigned: hex() and oct(), and the
> format operators "%u", "%o" and "%x".  These have different semantics
> for bignums!  (There they ignore the request for unsignedness and
> return a signed representation anyway.)

This would probably be solved by the fact that after the change 1<<31
will be positive. The real problem is that << stops having 32 bit semantics --
but it never really had those anyway; it had machine-long-size semantics,
which were unportable, so we can just ask people with unportable code to
fix it.

What do you think? Should I issue a warning on shifting an integer so
it would be cut/signed in the old semantics?
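Concretely, under unified integers (the semantics modern Python ended up with) the "unsigned" operations come out signed and shifts never truncate; these assertions simply record today's behaviour:

```python
# Today's unified-int behaviour, matching the semantics discussed above:
assert 1 << 31 == 2147483648       # positive; no wrap to -2**31
assert hex(-1) == '-0x1'           # signed representation, not '0xffffffff'
assert (1 << 100) >> 100 == 1      # no machine-word truncation of shifts
```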

> Many C APIs for other datatypes currently take int or long arguments,
> e.g. list indexing and slicing.  I suppose these could stay the same,
> or should we provide ways to use longer integers from C as well?

Hmmmm....I'd probably add PyInt_AS_LONG_LONG under an #ifdef HAVE_LONG_LONG

> Also, what will you do about PyInt_AS_LONG()?  If PyInt_Check()
> returns true for bignums, C code that uses PyInt_Check() and then
> assumes that PyInt_AS_LONG() will return a valid outcome is in for a
> big surprise!

Yes, that's a problem. I have no immediate solution to that -- I'll
add it to the list of open issues.

> Note that the implementation suggested below implies that the overflow
> boundary is at a different value than currently -- you take one bit
> away from the long.  For backwards compatibility I think that may be
> bad...

It also means overflow raises a different exception. Again, I suspect
it will be used only in cases where the algorithm is supposed to maintain
that internal results are not bigger than the inputs, or things like that,
and there only as a debugging aid -- so I don't think that this would be
that bad. And if people want to avoid using the longs for performance
reasons, then the implementation should definitely *not* lie to them.

> Almost.  The current bignum implementation actually has a length field
> first.

My bad. ;-)

> I have an alternative implementation in mind where the type field is
> actually different for machine ints and bignums.  Then the existing
> int representation can stay, and we lose no bits.  This may have other
> implications though, since uses of type(x) == type(1) will be broken.
> Once the type/class unification is complete, this could be solved by
> making long a subtype of int.

OK, so what's the concrete advice? How about if I just said "integer operations
that previously raised OverflowError now return long integers, and literals
in programs that are too big to be integers are long integers"? I started
leaning this way when I started writing the PEP and decided that true
unification may not be the low-hanging fruit we always assumed it would be.

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From moshez@zadka.site.co.il  Mon Mar 12 00:36:58 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 02:36:58 +0200 (IST)
Subject: [Python-Dev] Re: Non-integer Division
In-Reply-To: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>
References: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <20010312003658.01096AA27@darjeeling.zadka.site.co.il>

On Sun, 11 Mar 2001, Guido van Rossum <guido@digicool.com> wrote:

> > // Operator
> 
> Note: we could wind up using a different way to spell this operator,
> e.g. Pascal uses 'div'.  The disadvantage of 'div' is that it
> introduces a new reserved word, with all the issues it creates.  The
> disadvantage of '//' is that it means something very different to Java
> and C++ users.

I have zero (0) intuition about what is better. You choose --- I have
no opinions on this. If we do go the "div" route, I need to also think
up a syntactic migration path once I figure out the parsing issues
involved. This isn't an argument -- just something you might want to 
consider before pronouncing on "div".

> Maybe for compatibility of bytecode files we should come up with a
> better name, e.g. FLOAT_DIV?

Hmmmm..... bytecode files have so far failed to be compatible across
any revision. I have no problems with that, just that I feel that if
we're serious about compatibility, we should say so, and if we're not,
then half-assed measures will not help.

[re: from __future__ import non_integer_division] 
> I find "non_integer_division" rather long.  Maybe it should be called
> "float_division"?

I have no problems with that -- except that if the rational PEP is accepted,
then this would be rational_integer_division, and I didn't want to commit
myself yet.

You haven't commented yet about the rational PEP, so I don't know if that's
even an option.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From moshez@zadka.site.co.il  Mon Mar 12 01:00:25 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 03:00:25 +0200 (IST)
Subject: [Python-Dev] Re: Adding a Rational Type to Python
In-Reply-To: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>
References: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <20010312010025.C93A7AA27@darjeeling.zadka.site.co.il>

On Sun, 11 Mar 2001, Guido van Rossum <guido@digicool.com> wrote:

> I believe that a fourth PEP should be written as well: decimal
> floating point.  Maybe Tim can draft this?

Better Tim than me. I have very little decimal point experience, and in
any case I'd find it hard to write a PEP I don't believe in. However, I
would rather it be written, if only to be officially rejected, so if
no one volunteers to write it, I'm willing to do it anyway.
(Besides, I might manage to actually overtake Jeremy in number of PEPs
if I do this)

> It's kind of sneaky not to mention in the abstract that this should be
> the default representation for numbers containing a decimal point,
> replacing most use of floats!

I beg the mercy of the court. This was here, but got lost in the editing.
I've put it back.

> This PEP definitely needs a section of arguments Pro and Con.  For
> Con, mention at least that rational arithmetic is much slower than
> floating point, and can become *very* much slower when algorithms
> aren't coded carefully.

Note that I did try to help with coding carefully by adding the ".trim"
method.

> There are also backwards compatibility issues at the C level.

Hmmmmm....what are those? Very few C functions explicitly expect a
float, and the responsibility here can be pushed off to the Python
programmer by requiring explicit floats. For the others, PyArg_ParseTuple
can just coerce to float with the "d" type.

> Question: the time module's time() function currently returns a
> float.  Should it return a rational instead?  This is a trick question.

It should return the most exact number the underlying operating system
supports. For example, in OSes supporting gettimeofday, return a rational
built from tv_sec and tv_usec.
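A sketch of that suggestion in terms of today's library, where time.time_ns stands in for gettimeofday (the function name here is made up for illustration):

```python
import time
from fractions import Fraction

def rational_time():
    # An exact timestamp as a rational, in the spirit of building one
    # from tv_sec and tv_usec: integer nanoseconds over 10**9.
    return Fraction(time.time_ns(), 10 ** 9)

t = rational_time()
assert t > 0
assert t.denominator <= 10 ** 9   # exact; no binary rounding in the value
```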
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From jeremy@alum.mit.edu  Mon Mar 12 01:22:04 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Sun, 11 Mar 2001 20:22:04 -0500 (EST)
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <3AAC08DB.9D4E96B4@ActiveState.com>
References: <3AAC08DB.9D4E96B4@ActiveState.com>
Message-ID: <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "PP" == Paul Prescod <paulp@ActiveState.com> writes:

  PP> Let's discuss my proposal in the types-sig. Executive summary:

  PP> * incremental development policy
  PP> * syntax for parameter type declarations
  PP> * syntax for return type declarations
  PP> * optional runtime type checking
  PP> * goals are better runtime error reporting and method
  PP>    documentation

If your goal is really the last one, then I don't see why we need the
first four <0.9 wink>.  Let's take this to the doc-sig.

I have never felt that Python's runtime error reporting is all that
bad.  Can you provide some more motivation for this concern?  Do you
have any examples of obscure errors that will be made clearer via type
declarations?

The best example I can think of for bad runtime error reporting is a
function that expects a sequence (perhaps of strings) and is passed a
string.  Since a string is a sequence, the argument is treated as a
sequence of length-1 strings.  I'm not sure how type declarations
help, because:

    (1) You would usually want to say that the function accepts a
        sequence -- and that doesn't get you any farther.

    (2) You would often want to say that the type of the elements of
        the sequence doesn't matter -- like len -- or that the type of
        the elements matters but the function is polymorphic -- like
        min.  In either case, you seem to be ruling out types for
        these very common sorts of functions.
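The string-as-sequence pitfall looks like this in practice (names invented for illustration):

```python
def join_upper(seq):
    # Intended for a sequence of strings...
    return "-".join(s.upper() for s in seq)

assert join_upper(["ab", "cd"]) == "AB-CD"   # intended use
assert join_upper("ab") == "A-B"             # lone string: silently
                                             # iterated character by character
```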

If documentation is really the problem you want to solve, I imagine
we'd make much more progress if we could agree on a javadoc-style
format for documentation.  The ability to add return-type declarations
to functions and methods doesn't seem like much of a win.

Jeremy


From pedroni@inf.ethz.ch  Mon Mar 12 01:34:52 2001
From: pedroni@inf.ethz.ch (Samuele Pedroni)
Date: Mon, 12 Mar 2001 02:34:52 +0100
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>  <200103112137.QAA13084@cj20424-a.reston1.va.home.com>
Message-ID: <003f01c0aa94$a3be18c0$325821c0@newmexico>

Hi.

[GvR]
> Actually, I intend to deprecate locals().  For now, globals() are
> fine.  I also intend to deprecate vars(), at least in the form that is
> equivalent to locals().
>
That's fine for me. Will that deprecation already be active in 2.1, e.g.
having locals() and param-less vars() raise a warning?
I imagine a (new) function that produces a snapshot of the values in the
local, free and cell vars of a scope can do the job required for simple
debugging (the copy will not allow the values to be modified back),
or another approach...

regards, Samuele Pedroni



From pedroni@inf.ethz.ch  Mon Mar 12 01:39:51 2001
From: pedroni@inf.ethz.ch (Samuele Pedroni)
Date: Mon, 12 Mar 2001 02:39:51 +0100
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>  <200103112137.QAA13084@cj20424-a.reston1.va.home.com>
Message-ID: <001c01c0aa95$55836f60$325821c0@newmexico>

Hi.

[GvR]
> Actually, I intend to deprecate locals().  For now, globals() are
> fine.  I also intend to deprecate vars(), at least in the form that is
> equivalent to locals().
>
That's fine for me. Will that deprecation already be active in 2.1, e.g.
having locals() and param-less vars() raise a warning?
I imagine a (new) function that produces a snapshot of the values in the
local, free and cell vars of a scope can do the job required for simple
debugging (the copy will not allow the values to be modified back),
or another approach...

In the meantime (if there's a meantime), is it OK for Jython to behave
the way I have explained, or not,
wrt exec + locals() + global + nested scopes?

regards, Samuele Pedroni



From michel@digicool.com  Mon Mar 12 02:05:48 2001
From: michel@digicool.com (Michel Pelletier)
Date: Sun, 11 Mar 2001 18:05:48 -0800 (PST)
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <3AAC08DB.9D4E96B4@ActiveState.com>
Message-ID: <Pine.LNX.4.32.0103111745440.887-100000@localhost.localdomain>

On Sun, 11 Mar 2001, Paul Prescod wrote:

> Let's discuss my proposal in the types-sig. Executive summary:
>
>  * incremental development policy
>  * syntax for parameter type declarations
>  * syntax for return type declarations
>  * optional runtime type checking
>  * goals are better runtime error reporting and method documentation

I could be way over my head here, but I'll try to give you my ideas.

I've read the past proposals for type declarations and their
syntax, and I've also read a good bit of the types-sig archive.

I feel that there is not as much benefit to extending type declarations
into the language as there is to interfaces.  I feel this way because I'm
not sure what benefit this has over an object that describes the types you
are expecting and is associated with your object (like an interface).

The upshot of having an interface describe your expected parameter and
return types is that the type checking can be made as compile/run-time,
optional/mandatory as you want without changing the language or your
implementation at all.  "Strong" checking could be done during testing,
and no checking at all during production, and any level in between.

A disadvantage of an interface is that it is a separate, additional step
over just writing code (as are any type assertions in the language, but
those are "easier" inline with the implementation).  But this
disadvantage becomes an advantage when you imagine that the interface
could be developed later, and bolted onto the implementation later
without changing the implementation.

Also, type checking in general is good, but what about preconditions (this
parameter must be an int > 5 and < 10) and postconditions and other
conditions one does now with assertions?  Would these be more language
extensions in your proposal?

As I see it, interfaces satisfy your first point, remove the need for your
second and third points, satisfy your fourth point, and meet the goals of
your fifth.

Nice to meet you at the conference,

-Michel




From greg@cosc.canterbury.ac.nz  Mon Mar 12 03:10:19 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 12 Mar 2001 16:10:19 +1300 (NZDT)
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: <003f01c0aa94$a3be18c0$325821c0@newmexico>
Message-ID: <200103120310.QAA04837@s454.cosc.canterbury.ac.nz>

Samuele Pedroni <pedroni@inf.ethz.ch>:

> I imagine a (new) function that produces a snapshot of the values in
> the local, free and cell vars of a scope can do the job required for
> simple debugging (the copy will not allow the values to be modified back)

Modifying the values doesn't cause any problem, only
adding new names to the scope. So locals() or whatever
replaces it could return a mapping object that doesn't 
allow adding any keys.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim.one@home.com  Mon Mar 12 03:25:56 2001
From: tim.one@home.com (Tim Peters)
Date: Sun, 11 Mar 2001 22:25:56 -0500
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: <200103112137.QAA13084@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEPGJEAA.tim.one@home.com>

[Guido]
> Actually, I intend to deprecate locals().  For now, globals() are
> fine.  I also intend to deprecate vars(), at least in the form that is
> equivalent to locals().

OK by me.  Note that we agreed long ago that if nested scopes ever made it
in, we would need to supply a way to get a "namespace mapping" object so that
stuff like:

    print "The value of i is %(i)s and j %(j)s" % locals()

could be replaced by:

    print "The value of i is %(i)s and j %(j)s" % namespace_map_object()

Also agreed this need not be a dict; fine by me if it's immutable too.
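A minimal sketch of such an object (the class name is hypothetical): anything with __getitem__ works on the right of %, so an immutable view over the relevant scope dicts suffices:

```python
class NamespaceMapping:
    """Read-only lookup across several scope dicts, innermost first."""
    def __init__(self, *scopes):
        self._scopes = scopes
    def __getitem__(self, name):
        for scope in self._scopes:
            if name in scope:
                return scope[name]
        raise KeyError(name)
    # No __setitem__: the mapping is deliberately immutable.

ns = NamespaceMapping({'i': 3}, {'j': 4})
assert "The value of i is %(i)s and j %(j)s" % ns == \
       "The value of i is 3 and j 4"
```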



From ping@lfw.org  Mon Mar 12 05:01:49 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Sun, 11 Mar 2001 21:01:49 -0800 (PST)
Subject: [Python-Dev] Re: Deprecating locals() (was Re: nested scopes and global: some
 corner cases)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEPGJEAA.tim.one@home.com>
Message-ID: <Pine.LNX.4.10.10103112056010.13108-100000@skuld.kingmanhall.org>

On Sun, 11 Mar 2001, Tim Peters wrote:
> OK by me.  Note that we agreed long ago that if nested scopes ever made it
> in, we would need to supply a way to get a "namespace mapping" object so that
> stuff like:
> 
>     print "The value of i is %(i)s and j %(j)s" % locals()
> 
> could be replaced by:
> 
>     print "The value of i is %(i)s and j %(j)s" % namespace_map_object()

I remarked to Jeremy at Python 9 that, given that we have new
variable lookup rules, there should be an API to perform this
lookup.  I suggested that a new method on frame objects would
be a good idea, and Jeremy & Barry seemed to agree.

I was originally thinking of frame.lookup('whatever'), but if
that method happens to be tp_getitem, then i suppose

    print "i is %(i)s and j is %(j)s" % sys.getframe()

would work.  We could call it something else, but one way or
another it's clear to me that this object has to follow lookup
rules that are completely consistent with whatever kind of
scoping is in effect (i.e. throw out *both* globals() and
locals() and provide one function that looks up the whole set
of visible names, rather than just one scope's contents).


-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso



From ping@lfw.org  Mon Mar 12 05:18:06 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Sun, 11 Mar 2001 21:18:06 -0800 (PST)
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <Pine.LNX.4.32.0103111745440.887-100000@localhost.localdomain>
Message-ID: <Pine.LNX.4.10.10103112102030.13108-100000@skuld.kingmanhall.org>

On Sun, 11 Mar 2001, Michel Pelletier wrote:
> As I see it, interfaces satisfy your first point, remove the need for your
> second and third points, satisfy your fourth point, and meet the goals of
> your fifth.

For the record, here is a little idea i came up with on the
last day of the conference:

Suppose there is a built-in class called "Interface" with the
special property that whenever any immediate descendant of
Interface is sub-classed, we check to make sure all of its
methods are overridden.  If any methods are not overridden,
something like InterfaceException is raised.

This would be sufficient to provide very simple interfaces,
at least in terms of what methods are part of an interface
(it wouldn't do any type checking, but it could go a step
further and check the number of arguments on each method).

Example:

    >>> class Spam(Interface):
    ...     def islovely(self): pass
    ...
    >>> Spam()
    TypeError: interfaces cannot be instantiated
    >>> class Eggs(Spam):
    ...     def scramble(self): pass
    ...
    InterfaceError: class Eggs does not implement interface Spam
    >>> class LovelySpam(Spam):
    ...     def islovely(self): return 1
    ...
    >>> LovelySpam()
    <LovelySpam instance at ...>

Essentially this would replace the convention of writing a
whole bunch of methods that raise NotImplementedError as a
way of describing an abstract interface, making it a bit easier
to write and causing interfaces to be checked earlier (upon
subclassing, rather than upon method call).

It should be possible to implement this in Python using metaclasses.
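A sketch of that metaclass approach, following the example above (InterfaceError and the exact checking rules are my guesses at the intended behaviour):

```python
Interface = None  # forward reference used while Interface itself is built

class InterfaceError(TypeError):
    pass

class InterfaceMeta(type):
    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        if Interface is not None:
            for base in bases:
                # Immediate descendants of Interface are the interfaces
                # this class claims to implement; every method they
                # declare must be overridden here.
                if base is not Interface and Interface in base.__bases__:
                    for attr, val in vars(base).items():
                        if callable(val) and not attr.startswith('__'):
                            if attr not in ns:
                                raise InterfaceError(
                                    "class %s does not implement "
                                    "interface %s" % (name, base.__name__))
        return cls

    def __call__(cls, *args, **kwargs):
        # Interface and its immediate descendants cannot be instantiated.
        if cls is Interface or Interface in cls.__bases__:
            raise TypeError("interfaces cannot be instantiated")
        return super().__call__(*args, **kwargs)

class Interface(metaclass=InterfaceMeta):
    pass

class Spam(Interface):
    def islovely(self): pass
```

With these definitions, Spam() raises TypeError, defining Eggs(Spam) without islovely raises InterfaceError at class-creation time, and a LovelySpam that overrides islovely instantiates normally, matching the session above.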


-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso



From uche.ogbuji@fourthought.com  Mon Mar 12 07:11:27 2001
From: uche.ogbuji@fourthought.com (Uche Ogbuji)
Date: Mon, 12 Mar 2001 00:11:27 -0700
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: Message from Jeremy Hylton <jeremy@alum.mit.edu>
 of "Sun, 11 Mar 2001 20:22:04 EST." <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103120711.AAA09711@localhost.localdomain>

Jeremy Hylton:

> If documentation is really the problem you want to solve, I imagine
> we'd make much more progress if we could agree on a javadoc-style
> format for documentation.  The ability to add return-type declarations
> to functions and methods doesn't seem like much of a win.

I know this isn't the types SIG and all, but since it has come up here, I'd 
like to (once again) express my violent disagreement with the efforts to add 
static typing to Python.  After this, I won't pursue the thread further here.

I used to agree with John Max Skaller that if any such beast were needed, it 
should be a more general system for asserting correctness, but I now realize 
that even that avenue might lead to madness.

Python provides more than enough power for any programmer to impose their own 
correctness tests, including those for type-safety.  Paul has pointed out to 
me that the goal of the types SIG is some mechanism that would not affect 
those of us who want nothing to do with static typing; but my fear is that 
once the decision is made to come up with something, such considerations might 
be the first out the window.  Indeed, the last round of talks produced some 
very outre proposals.

Type errors are not even close to the majority of those I make while 
programming in Python, and I'm quite certain that the code I've written in 
Python is much less buggy than code I've written in strongly-typed languages.  
Expressiveness, IMO, is a far better aid to correctness than artificial 
restrictions (see Java for the example of school-marm programming gone amok).

If I understand Jeremy correctly, I am in strong agreement that it is at least 
worth trying the structured documentation approach to signalling pre- and 
post-conditions before turning Python into a rather different language.


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji@fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python




From tim.one@home.com  Mon Mar 12 07:30:03 2001
From: tim.one@home.com (Tim Peters)
Date: Mon, 12 Mar 2001 02:30:03 -0500
Subject: [Python-Dev] RE: Revive the types sig?
In-Reply-To: <200103120711.AAA09711@localhost.localdomain>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEACJFAA.tim.one@home.com>

Could we please prune followups on this to the Types-SIG now?  I don't really
need to see three copies of every msg, and everyone who has the slightest
interest in the topic should already be on the Types-SIG.

grumpily y'rs  - tim



From mwh21@cam.ac.uk  Mon Mar 12 08:24:03 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 12 Mar 2001 08:24:03 +0000
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Guido van Rossum's message of "Sun, 11 Mar 2001 18:55:03 -0500"
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com>
Message-ID: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido@digicool.com> writes:

> Here's the third installment -- my response to Moshe's rational
> numbers PEP.

I'm replying to Guido mainly through laziness.

> > PEP: XXX
> > Title: Adding a Rational Type to Python
> > Version: $Revision$
> > Author: pep@zadka.site.co.il (Moshe Zadka)
> > Status: Draft
> > Python-Version: 2.2
> > Type: Standards Track
> > Created: 11-Mar-2001
> > Post-History:
> > 
> > 
> > Abstract
> > 
> >     Python has no number type whose semantics are that of a
> >     unboundedly precise rational number.
> 
> But one could easily be added to the standard library, and several
> implementations exist, including one in the standard distribution:
> Demo/classes/Rat.py.
> 
> >     This proposal explains the
> >     semantics of such a type, and suggests builtin functions and
> >     literals to support such a type. In addition, if division of
> >     integers would return a non-integer, it could also return a
> >     rational type.
> 
> It's kind of sneaky not to mention in the abstract that this should be
> the default representation for numbers containing a decimal point,
> replacing most use of floats!

If "/" on integers returns a rational (as I presume it will if
rationals get in as it's the only sane return type), then can we
please have the default way of writing rationals as "p/q"?  OK, so it
might be inefficient (a la complex numbers), but it should be trivial
to optimize if required.

Having ddd.ddd be a rational bothers me.  *No* language does that at
present, do they?  Also, writing rational numbers as decimal floats
strikes me as a bit loopy.  Is

  0.33333333

1/3 or 3333333/10000000?

Certainly, if it's to go in, I'd like to see

> > Literals
> > 
> >     Literals conforming to the RE '\d*.\d*' will be rational numbers.

in the PEP as justification.

Cheers,
M.

-- 
  MAN:  How can I tell that the past isn't a fiction designed to
        account for the discrepancy between my immediate physical
        sensations and my state of mind?
                   -- The Hitch-Hikers Guide to the Galaxy, Episode 12



From tim.one@home.com  Mon Mar 12 08:52:49 2001
From: tim.one@home.com (Tim Peters)
Date: Mon, 12 Mar 2001 03:52:49 -0500
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEADJFAA.tim.one@home.com>

[Michael Hudson]
> ...
> Having ddd.ddd be a rational bothers me.  *No* language does that at
> present, do they?

ABC (Python's closest predecessor) did.  6.02e23 and 1.073242e-301 were also
exact rationals.  *All* numeric literals were.  This explains why they aren't
in Python, but doesn't explain exactly why:  i.e., it didn't work well in
ABC, but it's unclear whether that's because rationals suck, or because you
got rationals even when 10,000 years of computer history <wink> told you that
"." would get you something else.

> Also, writing rational numbers as decimal floats strikes me as a
> bit loopy.  Is
>
>   0.33333333
>
> 1/3 or 3333333/10000000?

Neither, it's 33333333/100000000 (which is what I expect you intended for
your 2nd choice).  Else

    0.33333333 == 33333333/100000000

would be false, and

    0.33333333 * 3 == 1

would be true, and those are absurd if both sides are taken as rational
notations.  OTOH, it's possible to do rational<->string conversion with an
extended notation for "repeating decimals", e.g.

   str(1/3) == "0.(3)"
   eval("0.(3)") == 1/3

would be possible (indeed, I've implemented it in my own rational classes,
but not by default since identifying "the repeating part" in rat->string can
take space proportional to the magnitude of the denominator).
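
The rat->string direction Tim describes can be sketched with long
division, recording remainders to find where the digits start repeating
(assuming the later fractions module; as he notes, the digit buffer can
grow in proportion to the denominator):

```python
from fractions import Fraction

def rat_to_repeating(q):
    """Render a Fraction as a decimal string, with the repeating
    part (if any) in parentheses, e.g. Fraction(1, 3) -> '0.(3)'."""
    sign = '-' if q < 0 else ''
    q = abs(q)
    n, d = q.numerator, q.denominator
    whole, rem = divmod(n, d)
    if rem == 0:
        return sign + str(whole)
    digits = []
    seen = {}  # remainder -> index in digits where it first appeared
    while rem and rem not in seen:     # at most d distinct remainders
        seen[rem] = len(digits)
        digit, rem = divmod(rem * 10, d)
        digits.append(str(digit))
    if rem == 0:
        return sign + '%d.%s' % (whole, ''.join(digits))
    i = seen[rem]  # the cycle starts at the first repeated remainder
    return sign + '%d.%s(%s)' % (whole, ''.join(digits[:i]), ''.join(digits[i:]))

print(rat_to_repeating(Fraction(1, 3)))   # 0.(3)
print(rat_to_repeating(Fraction(1, 6)))   # 0.1(6)
print(rat_to_repeating(Fraction(1, 4)))   # 0.25
```

The eval direction would need a matching parser for the "(...)"
notation, which is a straightforward geometric-series computation.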

but-"."-is-mnemonic-for-the-"point"-in-"floating-point"-ly y'rs  - tim



From moshez@zadka.site.co.il  Mon Mar 12 11:51:36 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 13:51:36 +0200 (IST)
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com>
Message-ID: <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>

On 12 Mar 2001 08:24:03 +0000, Michael Hudson <mwh21@cam.ac.uk> wrote:
 
> If "/" on integers returns a rational (as I presume it will if
> rationals get in as it's the only sane return type), then can we
> please have the default way of writing rationals as "p/q"?

That's proposed in a different PEP. Personally (*shock*) I'd like
all my PEPs to go in, but we sort of agreed that they will only
get in if they can get in in separate pieces.
  
> Having ddd.ddd be a rational bothers me.  *No* language does that at
> present, do they?  Also, writing rational numbers as decimal floats
> strikes me as a bit loopy.  Is
> 
>   0.33333333
> 
> 1/3 or 3333333/10000000?

The latter. But decimal numbers *are* rationals...just the denominator
is always a power of 10.
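
This point is easy to check with Python's much later fractions module
(added in 2.6, long after this thread): a decimal string read exactly is
a rational whose denominator is a power of 10, reduced like any other
fraction.

```python
from fractions import Fraction

print(Fraction('1.3'))          # 13/10
print(Fraction('0.33333333'))   # 33333333/100000000
print(Fraction('0.25'))         # 1/4 -- the power of 10 partly cancels
```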

> Certainly, if it's to go in, I'd like to see
> 
> > > Literals
> > > 
> > >     Literals conforming to the RE '\d*.\d*' will be rational numbers.
> 
> in the PEP as justification.
 
I'm not understanding you. Do you think it needs more justification, or
that it is justification for something?
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From mwh21@cam.ac.uk  Mon Mar 12 12:03:17 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 12 Mar 2001 12:03:17 +0000
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: "Tim Peters"'s message of "Mon, 12 Mar 2001 03:52:49 -0500"
References: <LNBBLJKPBEHFEDALKOLCOEADJFAA.tim.one@home.com>
Message-ID: <m3zoer6vp6.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one@home.com> writes:

> [Michael Hudson]
> > ...
> > Having ddd.ddd be a rational bothers me.  *No* language does that at
> > present, do they?
> 
> ABC (Python's closest predecessor) did.  6.02e23 and 1.073242e-301
> were also exact rationals.  *All* numeric literals were.  This
> explains why they aren't in Python, but doesn't explain exactly why:
> i.e., it didn't work well in ABC, but it's unclear whether that's
> because rationals suck, or because you got rationals even when
> 10,000 years of computer history <wink> told you that "." would get
> you something else.

Well, it seems likely that it wouldn't work in Python either, doesn't it?
Especially with 10010 years of computer history.

> > Also, writing rational numbers as decimal floats strikes me as a
> > bit loopy.  Is
> >
> >   0.33333333
> >
> > 1/3 or 3333333/10000000?
> 
> Neither, it's 33333333/100000000 (which is what I expect you intended for
> your 2nd choice).

Err, yes.  I was feeling too lazy to count 0's.

[snip]
> OTOH, it's possible to do rational<->string conversion with an
> extended notation for "repeating decimals", e.g.
> 
>    str(1/3) == "0.(3)"
>    eval("0.(3)") == 1/3
> 
> would be possible (indeed, I've implemented it in my own rational
> classes, but not by default since identifying "the repeating part"
> in rat->string can take space proportional to the magnitude of the
> denominator).

Hmm, I wonder what the repr of rational(1,3) is...

> but-"."-is-mnemonic-for-the-"point"-in-"floating-point"-ly y'rs  - tim

Quite.

Cheers,
M.

-- 
  Slim Shady is fed up with your shit, and he's going to kill you.
                         -- Eminem, "Public Service Announcement 2000"



From mwh21@cam.ac.uk  Mon Mar 12 12:07:19 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 12 Mar 2001 12:07:19 +0000
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Moshe Zadka's message of "Mon, 12 Mar 2001 13:51:36 +0200 (IST)"
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk> <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>
Message-ID: <m3wv9v6vig.fsf@atrus.jesus.cam.ac.uk>

Moshe Zadka <moshez@zadka.site.co.il> writes:

> On 12 Mar 2001 08:24:03 +0000, Michael Hudson <mwh21@cam.ac.uk> wrote:
>  
> > If "/" on integers returns a rational (as I presume it will if
> > rationals get in as it's the only sane return type), then can we
> > please have the default way of writing rationals as "p/q"?
> 
> That's proposed in a different PEP. Personally (*shock*) I'd like
> all my PEPs to go in, but we sort of agreed that they will only
> get in if they can get in in seperate pieces.

Fair enough.

> > Having ddd.ddd be a rational bothers me.  *No* language does that at
> > present, do they?  Also, writing rational numbers as decimal floats
> > strikes me as a bit loopy.  Is
> > 
> >   0.33333333
> > 
> > 1/3 or 3333333/10000000?
> 
> The latter. But decimal numbers *are* rationals...just the denominator
> is always a power of 10.

Well, floating point numbers are rationals too, only the denominator
is always a power of 2 (or sixteen, if you're really lucky).
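
The same check works for Michael's observation (again assuming the later
fractions module): converting a binary float to an exact rational always
yields a power-of-two denominator.

```python
from fractions import Fraction

f = Fraction(0.1)   # the exact value of the binary float 0.1
print(f)            # 3602879701896397/36028797018963968
print(f.denominator == 2 ** 55)   # True
```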

I suppose I don't have any rational (groan) objections, but it just
strikes me instinctively as a Bad Idea.

> > Certainly, if it's to go in, I'd like to see
                                                 ^
                                             "more than"
sorry.

> > > > Literals
> > > > 
> > > >     Literals conforming to the RE '\d*.\d*' will be rational numbers.
> > 
> > in the PEP as justification.
>  
> I'm not understanding you. Do you think it needs more justification,
> or that it is justification for something?

I think it needs more justification.

Well, actually I think it should be dropped, but if that's not going
to happen, then it needs more justification.

Cheers,
M.

-- 
  To summarise the summary of the summary:- people are a problem.
                   -- The Hitch-Hikers Guide to the Galaxy, Episode 12



From paulp@ActiveState.com  Mon Mar 12 12:27:29 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 04:27:29 -0800
Subject: [Python-Dev] Adding a Rational Type to Python
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>
Message-ID: <3AACC0B1.4AD48247@ActiveState.com>

Whether or not Python adopts rationals as the default number type, a
rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
2.2.

I think that Python users should be allowed to experiment with it before
it becomes the default. If I recode my existing programs to use
rationals and they experience an exponential slow-down, that might
influence my recommendation to Guido. 
-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)


From thomas@xs4all.net  Mon Mar 12 13:16:00 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 12 Mar 2001 14:16:00 +0100
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <m3zoer6vp6.fsf@atrus.jesus.cam.ac.uk>; from mwh21@cam.ac.uk on Mon, Mar 12, 2001 at 12:03:17PM +0000
References: <LNBBLJKPBEHFEDALKOLCOEADJFAA.tim.one@home.com> <m3zoer6vp6.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <20010312141600.Q404@xs4all.nl>

On Mon, Mar 12, 2001 at 12:03:17PM +0000, Michael Hudson wrote:

> Hmm, I wonder what the repr of rational(1,3) is...

Well, 'rational(1,3)', of course. Unless 1/3 returns a rational, in which
case it can just return '1/3' :-)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@digicool.com  Mon Mar 12 13:51:22 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 08:51:22 -0500
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: Your message of "Mon, 12 Mar 2001 02:39:51 +0100."
 <001c01c0aa95$55836f60$325821c0@newmexico>
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com> <200103112137.QAA13084@cj20424-a.reston1.va.home.com>
 <001c01c0aa95$55836f60$325821c0@newmexico>
Message-ID: <200103121351.IAA18642@cj20424-a.reston1.va.home.com>

> [GvR]
> > Actually, I intend to deprecate locals().  For now, globals() are
> > fine.  I also intend to deprecate vars(), at least in the form that is
> > equivalent to locals().

[Samuele]
> That's fine for me. Will that deprecation already be active with 2.1, e.g.
> having locals() and param-less vars() raise a warning?

Hm, I hadn't thought of doing it right now.

> I imagine a (new) function that produces a snap-shot of the values in the
> local, free and cell vars of a scope can do the job required for simple
> debugging (the copy will not allow modifying the values back),
> or another approach...

Maybe.  I see two solutions: a function that returns a copy, or a
function that returns a "lazy mapping".  The former could be done as
follows given two scopes:

import __builtin__

def namespace():
    d = __builtin__.__dict__.copy()
    d.update(globals())
    d.update(locals())
    return d

The latter like this:

def namespace():
    class C:
        def __init__(self, g, l):
            self.__g = g
            self.__l = l
        def __getitem__(self, key):
            try:
                return self.__l[key]
            except KeyError:
                try:
                    return self.__g[key]
                except KeyError:
                    return __builtin__.__dict__[key]
    return C(globals(), locals())

But of course they would have to work harder to deal with nested
scopes and cells etc.
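
For what it's worth, a present-day sketch of the "lazy mapping" variant:
collections.ChainMap (which arrived much later, in Python 3.3) searches
several mappings in order without copying, and inspect can fetch the
caller's scopes.  The caveat about nested scopes and cells still applies.

```python
import builtins
import inspect
from collections import ChainMap

def namespace():
    # Look up names in the caller's locals, then globals, then
    # builtins, without copying any of the dicts.
    caller = inspect.currentframe().f_back
    return ChainMap(caller.f_locals, caller.f_globals, vars(builtins))

x = 10
def demo():
    y = 20
    ns = namespace()
    return ns['y'], ns['x'], ns['len'] is len

print(demo())   # (20, 10, True)
```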

I'm not sure if we should add this to 2.1 (if only because it's more
work than I'd like to put in this late in the game) and then I'm not
sure if we should deprecate locals() yet.

> In the meantime (if there's a meantime) is it ok for jython to behave
> the way I have explained or not?
> wrt exec+locals()+global+nested scopes.

Sure.  You may even document it as one of the known differences.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Mon Mar 12 14:50:44 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 09:50:44 -0500
Subject: [Python-Dev] Re: Unifying Long Integers and Integers
In-Reply-To: Your message of "Mon, 12 Mar 2001 02:25:23 +0200."
 <20010312002523.94F68AA3A@darjeeling.zadka.site.co.il>
References: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
 <20010312002523.94F68AA3A@darjeeling.zadka.site.co.il>
Message-ID: <200103121450.JAA19125@cj20424-a.reston1.va.home.com>

> [re: numerical PEPs mailing list] 
> > Please help yourself.  I recommend using SF since it requires less
> > overhead for the poor python.org sysadmins.
> 
> Err...I can't. Requesting an SF mailing list is an admin operation.

OK.  I won't make the request (too much going on still) so please ask
someone else at PythonLabs to do it.  Don't just sit there waiting for
one of us to read this mail and do it!

> What do you think? Should I issue a warning on shifting an integer so
> it would be cut/signed in the old semantics?

You'll have to, because the change in semantics will definitely break
some code.

> It also means overflow raises a different exception. Again, I suspect
> it will be used only in cases where the algorithm is supposed to maintain
> that internal results are not bigger than the inputs or things like that,
> and there only as a debugging aid -- so I don't think that this would be this
> bad. And if people want to avoid using the longs for performance reasons,
> then the implementation should definitely *not* lie to them.

It's not clear that using something derived from the machine word size
is the most helpful here.  Maybe a separate integral type that has a
limited range should be used for this.

> OK, so what's the concrete advice?

Propose both alternatives in the PEP.  It's too early to make
decisions -- first we need to have a catalog of our options, and their
consequences.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Mon Mar 12 14:52:20 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 09:52:20 -0500
Subject: [Python-Dev] Re: Non-integer Division
In-Reply-To: Your message of "Mon, 12 Mar 2001 02:36:58 +0200."
 <20010312003658.01096AA27@darjeeling.zadka.site.co.il>
References: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
 <20010312003658.01096AA27@darjeeling.zadka.site.co.il>
Message-ID: <200103121452.JAA19139@cj20424-a.reston1.va.home.com>

> > > // Operator
> > 
> > Note: we could wind up using a different way to spell this operator,
> > e.g. Pascal uses 'div'.  The disadvantage of 'div' is that it
> > introduces a new reserved word, with all the issues it creates.  The
> > disadvantage of '//' is that it means something very different to Java
> > and C++ users.
> 
> I have zero (0) intuition about what is better. You choose --- I have
> no opinions on this. If we do go the "div" route, I need to also think
> up a syntactic migration path once I figure out the parsing issues
> involved. This isn't an argument -- just something you might want to 
> consider before pronouncing on "div".

As I said in the other thread, it's too early to make the decision --
just present both options in the PEP, and arguments pro/con for each.

> > Maybe for compatibility of bytecode files we should come up with a
> > better name, e.g. FLOAT_DIV?
> 
> Hmmmm.....bytecode files so far have failed to be compatible for
> any revision. I have no problems with that, just that I feel that if
> we're serious about compatibility, we should say so, and if we're not,
> then half-assed measures will not help.

Fair enough.

> [re: from __future__ import non_integer_division] 
> > I find "non_integer_division" rather long.  Maybe it should be called
> > "float_division"?
> 
> I have no problems with that -- except that if the rational PEP is accepted,
> then this would rational_integer_division, and I didn't want to commit
> myself yet.

Understood.

> You haven't commented yet about the rational PEP, so I don't know if that's
> even an option.

Yes I have, but in summary, I still think rationals are a bad idea.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From moshez@zadka.site.co.il  Mon Mar 12 14:55:31 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 16:55:31 +0200 (IST)
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <3AACC0B1.4AD48247@ActiveState.com>
References: <3AACC0B1.4AD48247@ActiveState.com>, <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>
Message-ID: <20010312145531.649E1AA27@darjeeling.zadka.site.co.il>

On Mon, 12 Mar 2001, Paul Prescod <paulp@ActiveState.com> wrote:

> Whether or not Python adopts rationals as the default number type, a
> rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> 2.2.

OK, how about this:

1. I remove the "literals" part from my PEP to another PEP
2. I add to rational() an ability to take strings, such as "1.3" and 
   make rationals out of them

Does anyone have any objections to

a. doing that
b. the PEP that would result from 1+2
?

I even volunteer to code the first prototype.
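
In hindsight, the prototype volunteered here is essentially what the
fractions module (Python 2.6) came to provide.  A thin wrapper shows
both proposed call forms; the name rational() is of course the
hypothetical builtin under discussion, not a real function:

```python
from fractions import Fraction

def rational(x, y=None):
    # Sketch of the proposed builtin on top of Fraction: a pair of
    # integers, or a decimal string such as "1.3" (point 2 above).
    if y is not None:
        return Fraction(x, y)
    return Fraction(x)

print(rational(1, 3))     # 1/3
print(rational('1.3'))    # 13/10
```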
 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From guido@digicool.com  Mon Mar 12 14:57:31 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 09:57:31 -0500
Subject: [Python-Dev] Re: Adding a Rational Type to Python
In-Reply-To: Your message of "Mon, 12 Mar 2001 03:00:25 +0200."
 <20010312010025.C93A7AA27@darjeeling.zadka.site.co.il>
References: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
 <20010312010025.C93A7AA27@darjeeling.zadka.site.co.il>
Message-ID: <200103121457.JAA19188@cj20424-a.reston1.va.home.com>

> > Question: the time module's time() function currently returns a
> > float.  Should it return a rational instead?  This is a trick question.
> 
> It should return the most exact number the underlying operating system
> supports. For example, in OSes supporting gettimeofday, return a rational
> built from tv_sec and tv_usec.

I told you it was a trick question. :-)

Time may be *reported* in microseconds, but it's rarely *accurate* to
microseconds.  Because the precision is unclear, I think a float is
more appropriate here.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From paulp@ActiveState.com  Mon Mar 12 15:09:37 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 07:09:37 -0800
Subject: [Python-Dev] Adding a Rational Type to Python
References: <3AACC0B1.4AD48247@ActiveState.com>, <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il> <20010312145531.649E1AA27@darjeeling.zadka.site.co.il>
Message-ID: <3AACE6B1.A599279D@ActiveState.com>

Moshe Zadka wrote:
> 
> On Mon, 12 Mar 2001, Paul Prescod <paulp@ActiveState.com> wrote:
> 
> > Whether or not Python adopts rationals as the default number type, a
> > rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> > 2.2.
> 
> OK, how about this:
> 
> 1. I remove the "literals" part from my PEP to another PEP
> 2. I add to rational() an ability to take strings, such as "1.3" and
>    make rationals out of them

+1

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)


From guido@digicool.com  Mon Mar 12 15:09:15 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 10:09:15 -0500
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Your message of "Mon, 12 Mar 2001 04:27:29 PST."
 <3AACC0B1.4AD48247@ActiveState.com>
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>
 <3AACC0B1.4AD48247@ActiveState.com>
Message-ID: <200103121509.KAA19299@cj20424-a.reston1.va.home.com>

> Whether or not Python adopts rationals as the default number type, a
> rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> 2.2.
> 
> I think that Python users should be allowed to experiment with it before
> it becomes the default. If I recode my existing programs to use
> rationals and they experience an exponential slow-down, that might
> influence my recommendation to Guido. 

Excellent idea.  Moshe is already biting:

[Moshe]
> On Mon, 12 Mar 2001, Paul Prescod <paulp@ActiveState.com> wrote:
> 
> > Whether or not Python adopts rationals as the default number type, a
> > rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> > 2.2.
> 
> OK, how about this:
> 
> 1. I remove the "literals" part from my PEP to another PEP
> 2. I add to rational() an ability to take strings, such as "1.3" and 
>    make rationals out of them
> 
> Does anyone have any objections to
> 
> a. doing that
> b. the PEP that would result from 1+2
> ?
> 
> I even volunteer to code the first prototype.

I think that would make it a better PEP, and I recommend doing this,
because nothing can be so convincing as a working prototype!

Even so, I'm not sure that rational() should be added to the standard
set of built-in functions, but I'm much less opposed this than I am
against making 0.5 or 1/2 return a rational.  After all we have
complex(), so there's certainly a case to be made for rational().

Note: if you call it fraction() instead, it may appeal more to the
educational crowd!  (In grade school, we learn fractions; not until
late in high school do we learn that mathematicians call fractions
rationals.  It's the same as Randy Pausch's argument about what to call
a quarter turn: not 90 degrees, not pi/2, just call it 1/4 turn. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)


From thomas@xs4all.net  Mon Mar 12 15:55:12 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 12 Mar 2001 16:55:12 +0100
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <200103121509.KAA19299@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 10:09:15AM -0500
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il> <3AACC0B1.4AD48247@ActiveState.com> <200103121509.KAA19299@cj20424-a.reston1.va.home.com>
Message-ID: <20010312165512.S404@xs4all.nl>

On Mon, Mar 12, 2001 at 10:09:15AM -0500, Guido van Rossum wrote:

> Note: if you call it fraction() instead, it may appeal more to the
> educational crowd!  (In grade school, we learn fractions; not until
> late in high school do we learn that mathematicians call fractions
> rationals.  It's the same as Randy Pausch's argument about what to call
> a quarter turn: not 90 degrees, not pi/2, just call it 1/4 turn. :-)

+1 on fraction(). +0 on making it a builtin instead of a separate module.
(I'm not nearly as worried about adding builtins as I am with adding
keywords <wink>)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Samuele Pedroni <pedroni@inf.ethz.ch>  Mon Mar 12 16:47:22 2001
From: Samuele Pedroni <pedroni@inf.ethz.ch> (Samuele Pedroni)
Date: Mon, 12 Mar 2001 17:47:22 +0100 (MET)
Subject: [Python-Dev] about sparse inputs from the jython userbase & types, language extensions
Message-ID: <200103121647.RAA15331@core.inf.ethz.ch>

Hi.

What follows is maybe too abstract or naive to be useful; if reading this is
a waste of time: sorry.
Further, I don't know the content of the P3K kick-start session...

"We" are planning to add many features to python. It has also
been explicitly written that this is for the developers to have fun too ;).

Exact arithmetic, behind the scene promotion on overflow, etc...
nested scopes, iterators

A bit joking: lim(t->oo) python ~ Common Lisp

Ok, in python programs and data are not that much the same,
we don't have CL macros (but AFAIK dylan is an example of a language
without data & programs having the same structure but with CL-like macros, so
maybe...), and "we" are not as masochistic as a committee can be, and we
don't have all the history that CL has to carry.

Python does not have (by now) optional static typing (CL has such a beast, 
everybody knows), but this is always haunting around, mainly for documentation
and error checking purposes.

Many of the proposals also go in the direction of making life easier
for newbies, even for programming newbies...
(this is not a paradox, a regular and well chosen subset of CL can
be appropriate for them, and the world knows a beast called scheme).

Joke: making newbies happy is dangerous; then they will never want
to learn C ;)

The point: what is some (sparse) part of jython user base asking for?

1. better java integration (for sure).
2. p-e-r-f-o-r-m-a-n-c-e

They ask why jython is so slow, why it does not exploit unboxed ints or floats
(the more informed ones),
and whether it would be possible to translate jython to java to gain performance...

The python answer about performance is:
- Think, you don't really need it,
- find the hotspot and code it in C,
- programmer speed is more important than pure program speed,
- python is just a glue language
The jython one is not that different.

If someone comes from C or from a lot of java this is fair.
For the happy newbie that's disappointing. (And it can become
frustrating even for the experienced open-source programmer
 who wants to do more in less time: being able to do as many things
 as possible in python would be nice <wink>).

If python's importance increases, IMHO this will become a real issue
(in java, too, people are always asking for more performance).

If some software house gives them the right amount of performance and dynamism
out of python for $xK (that's what happens nowadays with CL), it's even more
disappointing.

(I'm aware that dealing with this, also from a pure code-complexity viewpoint,
may be too much for an open project in terms of motivation.)

regards, Samuele Pedroni.

PS: I'm aware of enough theoretical approaches to performance to know
that optional typing is just one of the possibilities; the point is that
performance as an issue should not be underestimated.



From Samuele Pedroni <pedroni@inf.ethz.ch>  Mon Mar 12 20:23:25 2001
From: Samuele Pedroni <pedroni@inf.ethz.ch> (Samuele Pedroni)
Date: Mon, 12 Mar 2001 21:23:25 +0100 (MET)
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
Message-ID: <200103122023.VAA20984@core.inf.ethz.ch>

Hi.

[GvR]
> > I imagine a (new) function that produce a snap-shot of the values in the
> > local,free and cell vars of a scope can do the job required for simple 
> > debugging (the copy will not allow to modify back the values), 
> > or another approach...
> 
> Maybe.  I see two solutions: a function that returns a copy, or a
> function that returns a "lazy mapping".  The former could be done as
> follows given two scopes:
> 
> def namespace():
>     d = __builtin__.__dict__.copy()
>     d.update(globals())
>     d.update(locals())
>     return d
> 
> The latter like this:
> 
> def namespace():
>     class C:
>         def __init__(self, g, l):
>             self.__g = g
>             self.__l = l
>         def __getitem__(self, key):
>             try:
>                 return self.__l[key]
>             except KeyError:
>                 try:
>                     return self.__g[key]
>                 except KeyError:
>                     return __builtin__.__dict__[key]
>     return C(globals(), locals())
> 
> But of course they would have to work harder to deal with nested
> scopes and cells etc.
> 
> I'm not sure if we should add this to 2.1 (if only because it's more
> work than I'd like to put in this late in the game) and then I'm not
> sure if we should deprecate locals() yet.
But in any case we would need something similar to repair pdb,
this independently of locals deprecation...
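
For what it's worth, the "lazy mapping" variant quoted above is essentially
what much later shipped as collections.ChainMap. A sketch (the use of
inspect to reach the caller's frame is the only non-obvious part; names
here are invented for illustration):

```python
import builtins
import inspect
from collections import ChainMap

def namespace():
    """Read-through view of the caller's namespaces: locals shadow
    globals, which shadow builtins -- writes touch none of them."""
    frame = inspect.currentframe().f_back          # the caller's frame
    return ChainMap(frame.f_locals, frame.f_globals, vars(builtins))

x = 10

def demo():
    y = 20
    ns = namespace()
    return ns["y"], ns["x"], ns["len"]("abc")      # local, global, builtin

print(demo())    # (20, 10, 3)
```

Lookups fall through the chain exactly like the hand-written __getitem__
above, and ChainMap writes go only to the first mapping, so this behaves
like the read-only snapshot being asked for.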

Samuele.



From thomas@xs4all.net  Mon Mar 12 21:04:31 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 12 Mar 2001 22:04:31 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
Message-ID: <20010312220425.T404@xs4all.nl>

Contrary to Guido's keynote last week <wink> there are still two warts I
know of in the current CPython. One is the fact that keywords cannot be used
as identifiers anywhere, the other is the fact that 'continue' can still not
be used inside a 'finally' clause. If I remember correctly, the latter isn't
too hard to fix, it just needs a decision on what it should do :)

Currently, falling out of a 'finally' block will re-raise the exception, if
any. Using 'return' and 'break' will drop the exception and continue on as
usual. However, that makes sense (imho) mostly because 'break' will continue
past the try/finally block and 'return' will break out of the function
altogether. Neither has a chance of re-entering the try/finally block
at all. I'm not sure if that would make sense for 'continue' inside
'finally'.

On the other hand, I'm not sure if it makes sense for 'break' to continue
but for 'continue' to break. :)
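
With a much later interpreter (CPython only made 'continue' inside
'finally' legal in 3.8), the semantics being debated here — drop the
exception, resume the loop — can be demonstrated:

```python
# Requires Python 3.8+; 'continue' inside 'finally' was a SyntaxError before.
log = []
for i in range(3):
    try:
        if i == 1:
            raise ValueError("boom")
        log.append(("ok", i))
    finally:
        if i == 1:
            # The pending ValueError is discarded and the loop resumes,
            # mirroring how 'break' and 'return' already drop exceptions.
            continue

# Iteration 1 neither appended nor raised: the exception evaporated.
print(log)    # [('ok', 0), ('ok', 2)]
```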

As for the other wart, I still want to fix it, but I'm not sure when I get
the chance to grok the parser-generator enough to actually do it :) 

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From msw@redhat.com  Mon Mar 12 21:47:05 2001
From: msw@redhat.com (Matt Wilson)
Date: Mon, 12 Mar 2001 16:47:05 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
Message-ID: <20010312164705.C641@devserv.devel.redhat.com>

We've been auditing various code lately to check for /tmp races and so
on.  It seems that tempfile.mktemp() is used throughout the Python
library.  While nice and portable, tempfile.mktemp() is vulnerable to
races.

The TemporaryFile does a nice job of handling the filename returned by
mktemp properly, but there are many modules that don't.

Should I attempt to patch them all to use TemporaryFile?  Or set up
conditional use of mkstemp on those systems that support it?
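
For concreteness, the race and the fix side by side (mkstemp creates and
opens the file in one atomic step, so nothing can be slipped in between
name generation and open):

```python
import os
import tempfile

# Racy pattern: mktemp() only *predicts* a free name; between the call
# and the open() an attacker can create (or symlink) that exact path.
#     name = tempfile.mktemp()
#     f = open(name, "w")          # may follow an attacker's symlink

# Race-free pattern: mkstemp() creates and opens the file in one step,
# using O_CREAT | O_EXCL, and returns the already-open descriptor.
fd, path = tempfile.mkstemp(suffix=".txt")
try:
    with os.fdopen(fd, "w") as f:
        f.write("scratch data")
finally:
    os.unlink(path)
```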

Cheers,

Matt
msw@redhat.com


From DavidA@ActiveState.com  Mon Mar 12 22:01:02 2001
From: DavidA@ActiveState.com (David Ascher)
Date: Mon, 12 Mar 2001 14:01:02 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
Message-ID: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com>

With apologies for the delay, here are my notes from the numeric coercion
day.

There were many topics which were defined by the Timbot to be within the
scope of the discussion.  Those included:

  - Whether numbers should be rationals / binary FP / decimal FP / etc.
  - Whether there should be support for both exact and inexact computations
  - What division means.

There were few "deliverables" at the end of the day, mostly a lot of
consternation on all sides of the multi-faceted divide, with the impression
in at least this observer's mind that there are few things more
controversial than what numbers are for and how they should work.  A few
things emerged, however:

  0) There is tension between making math in Python 'understandable' to a
high-school kid and making math in Python 'useful' to an engineer/scientist.

  1) We could consider using the new warnings framework for noting things
which are "dangerous" to do with numbers, such as:

       - noting that an operation on 'plain' ints resulted in a 'long'
result.
       - using == when comparing floating point numbers

  2) The Fortran notion of "Kind" as an orthogonal notion to "Type" may make
sense (details to be fleshed out).

  3) Pythonistas are good at quotes:

     "You cannot stop people from complaining, but you can influence what
they
      complain about." - Tim Peters

     "The only problem with using rationals for money is that money is,
      well, not rational." - Moshe Zadka

     "Don't get too apoplectic about this." - Tim Peters

  4) We all agreed that "2" + "23" will not equal "25".
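
Point (1) is easy to prototype with the warnings framework; a hypothetical
checked_add (all names invented here, not an actual proposal) that flags a
'plain' 32-bit int operation spilling over into a long:

```python
import warnings

def checked_add(a, b, bits=32):
    """Add two 'plain' ints and warn when the result no longer fits in
    a signed machine word -- i.e. would silently become a long."""
    result = a + b
    limit = 1 << (bits - 1)
    if not -limit <= result < limit:
        warnings.warn("int + int overflowed into long", RuntimeWarning,
                      stacklevel=2)
    return result

checked_add(1, 2)                  # silent
checked_add(2**31 - 1, 1)          # warns: result needs more than 32 bits
```

The same pattern (warn, don't fail) would cover the float == case too.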

--david ascher



From Greg.Wilson@baltimore.com  Mon Mar 12 22:29:31 2001
From: Greg.Wilson@baltimore.com (Greg Wilson)
Date: Mon, 12 Mar 2001 17:29:31 -0500
Subject: [Python-Dev] more Solaris extension grief
Message-ID: <930BBCA4CEBBD411BE6500508BB3328F1AC593@nsamcanms1.ca.baltimore.com>

I just updated my copy of Python from the CVS repo,
rebuilt on Solaris 5.8, and tried to compile an
extension that is built on top of C++.  I am now
getting lots 'n' lots of error messages as shown
below.  My compile line is:

gcc -shared  ./PyEnforcer.o  -L/home/gvwilson/cozumel/merlot/enforcer
-lenforcer -lopenssl -lstdc++  -o ./PyEnforcer.so

Has anyone seen this problem before?  It does *not*
occur on Linux, using the same version of g++.

Greg

p.s. I configured Python --with-gcc=g++

Text relocation remains                         referenced
    against symbol                  offset      in file
istream type_info function          0x1c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strstream.o)
istream type_info function          0x18
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdiostream.o)
_IO_stderr_buf                      0x2c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_stderr_buf                      0x28
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_default_xsputn                  0xc70
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(fileops.o)
_IO_default_xsputn                  0xa4
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(streambuf.o)
lseek                               0xa74
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(fileops.o)
_IO_str_init_readonly               0x620
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strstream.o)
_IO_stdout_buf                      0x24
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_stdout_buf                      0x38
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_file_xsputn                     0x43c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(filebuf.o)
fstat                               0xa8c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(fileops.o)
streambuf::sputbackc(char)          0x68c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x838
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x8bc
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x1b4c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x1b80
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x267c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x26f8
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
_IO_file_stat                       0x40c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(filebuf.o)
_IO_setb                            0x844
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(genops.o)
_IO_setb                            0x210
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strops.o)
_IO_setb                            0xa8
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(filedoalloc.o)
... and so on and so on ...


From barry@digicool.com  Mon Mar 12 23:15:15 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 18:15:15 -0500
Subject: [Python-Dev] Revive the types sig?
References: <jeremy@alum.mit.edu>
 <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
 <200103120711.AAA09711@localhost.localdomain>
Message-ID: <15021.22659.616556.298360@anthem.wooz.org>

>>>>> "UO" == Uche Ogbuji <uche.ogbuji@fourthought.com> writes:

    UO> I know this isn't the types SIG and all, but since it has come
    UO> up here, I'd like to (once again) express my violent
    UO> disagreement with the efforts to add static typing to Python.
    UO> After this, I won't pursue the thread further here.

Thank you Uche!  I couldn't agree more, and will also try to follow
your example, at least until we see much more concrete proposals from
the types-sig.  I just want to make a few comments for the record.

First, it seemed to me that the greatest push for static type
annotations at IPC9 was from the folks implementing Python on top of
frameworks other than C.  I know from my own experiences that there is
the allure of improved performance, e.g. JPython, given type hints
available to the compiler.  While perhaps a laudable goal, this
doesn't seem to be a stated top priority of Paul's.

Second, if type annotations are to be seriously considered for
inclusion in Python, I think we as a community need considerable
experience with a working implementation.  Yes, we need PEPs and specs
and such, but we need something real and complete that we can play
with, /without/ having to commit to its acceptance in mainstream
Python.  Therefore, I think it'll be very important for type
annotation proponents to figure out a way to allow people to see and
play with an implementation in an experimental way.

This might mean an extensive set of patches, a la Stackless.  After
seeing and talking to Neil and Andrew about PTL and Quixote, I think
there might be another way.  It seems that their approach might serve
as a framework for experimental Python syntaxes with minimal overhead.
If I understand their work correctly, they have their own compiler
which is built on Jeremy's tools, and which accepts a modified Python
grammar, generating different but compatible bytecode sequences.
E.g., their syntax has a "template" keyword approximately equivalent
to "def" and they do something different with bare strings left on the
stack.

The key trick is that it all hooks together with an import hook so
normal Python code doesn't need to know anything about the mechanics
of PTL compilation.  Given a homepage.ptl file, they just do an
"import homepage" and this gets magically transformed into a .ptlc
file and normal Python objects.
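
The hook described here predates today's import machinery, but the same
trick is easy to sketch against modern importlib: a finder/loader pair
that compiles "template" source into an ordinary module. (The source
lives in an in-memory registry here instead of a .ptl file on disk, and
the keyword rewrite is deliberately naive string surgery; the real PTL
compiler used a modified grammar.)

```python
import importlib.abc
import importlib.util
import sys

class TemplateLoader(importlib.abc.Loader):
    """Compile PTL-ish source into a normal Python module."""
    def __init__(self, source):
        self.source = source

    def create_module(self, spec):
        return None                       # default module creation is fine

    def exec_module(self, module):
        # Toy transform: treat 'template' as 'def'.
        python_src = self.source.replace("template ", "def ")
        code = compile(python_src, "<%s>" % module.__name__, "exec")
        exec(code, module.__dict__)

class TemplateFinder(importlib.abc.MetaPathFinder):
    """Serve registered template modules; a real hook would look for
    homepage.ptl on disk instead of consulting this registry."""
    registry = {
        "homepage": "template greet(name):\n    return 'Hello ' + name\n",
    }

    def find_spec(self, fullname, path=None, target=None):
        if fullname in self.registry:
            loader = TemplateLoader(self.registry[fullname])
            return importlib.util.spec_from_loader(fullname, loader)
        return None

sys.meta_path.insert(0, TemplateFinder())

import homepage                           # transparently compiled
print(homepage.greet("world"))            # Hello world
```

Client code just says "import homepage" and gets normal Python objects,
which is exactly the transparency being described.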

If I've got this correct, it seems like it would be a powerful tool
for playing with alternative Python syntaxes.  Ideally, the same
technique would allow the types-sig folks to create a working
implementation that would require only the installation of an import
hook.  This would let them build their systems with type annotation
and prove to the skeptical among us of their overwhelming benefit.

Cheers,
-Barry


From guido@digicool.com  Mon Mar 12 23:19:39 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:19:39 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: Your message of "Mon, 12 Mar 2001 14:01:02 PST."
 <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com>
References: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com>
Message-ID: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>

> With apologies for the delay, here are my notes from the numeric coercion
> day.
> 
> There were many topics which were defined by the Timbot to be within the
> scope of the discussion.  Those included:
> 
>   - Whether numbers should be rationals / binary FP / decimal FP / etc.
>   - Whether there should be support for both exact and inexact computations
>   - What division means.
> 
> There were few "deliverables" at the end of the day, mostly a lot of
> consternation on all sides of the multi-faceted divide, with the impression
> in at least this observer's mind that there are few things more
> controversial than what numbers are for and how they should work.  A few
> things emerged, however:
> 
>   0) There is tension between making math in Python 'understandable' to a
> high-school kid and making math in Python 'useful' to an engineer/scientist.
> 
>   1) We could consider using the new warnings framework for noting things
> which are "dangerous" to do with numbers, such as:
> 
>        - noting that an operation on 'plain' ints resulted in a 'long'
> result.
>        - using == when comparing floating point numbers
> 
>   2) The Fortran notion of "Kind" as an orthogonal notion to "Type" may make
> sense (details to be fleshed out).
> 
>   3) Pythonistas are good at quotes:
> 
>      "You cannot stop people from complaining, but you can influence what
> they
>       complain about." - Tim Peters
> 
>      "The only problem with using rationals for money is that money is,
>       well, not rational." - Moshe Zadka
> 
>      "Don't get too apoplectic about this." - Tim Peters
> 
>   4) We all agreed that "2" + "23" will not equal "25".
> 
> --david ascher

Thanks for the notes.  I couldn't be at the meeting, but I attended a
post-meeting lunch roundtable, where much of the above confusion was
reiterated for my convenience.  Moshe's three or four PEPs also came
out of that.  One thing we *could* agree to there, after I pressed
some people: 1/2 should return 0.5.  Possibly 1/2 should not be a
binary floating point number -- but then 0.5 shouldn't either, and
whatever happens, these (1/2 and 0.5) should have the same type, be it
rational, binary float, or decimal float.
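
With hindsight: that lunch agreement eventually surfaced as PEP 238, which
split the operator in two. In today's Python (and in 2.2+ via
`from __future__ import division`):

```python
# PEP 238: '/' became true division, '//' floor division.
q = 1 / 2       # 0.5, the same type as the literal 0.5
r = 1 // 2      # 0, the old truncating behaviour, now spelled explicitly

assert q == 0.5 and type(q) is type(0.5)
assert r == 0
```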

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Mon Mar 12 23:23:06 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:23:06 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
In-Reply-To: Your message of "Mon, 12 Mar 2001 16:47:05 EST."
 <20010312164705.C641@devserv.devel.redhat.com>
References: <20010312164705.C641@devserv.devel.redhat.com>
Message-ID: <200103122323.SAA22876@cj20424-a.reston1.va.home.com>

> We've been auditing various code lately to check for /tmp races and so
> on.  It seems that tempfile.mktemp() is used throughout the Python
> library.  While nice and portable, tempfile.mktemp() is vulnerable to
> races.
> 
> The TemporaryFile does a nice job of handling the filename returned by
> mktemp properly, but there are many modules that don't.
> 
> Should I attempt to patch them all to use TemporaryFile?  Or set up
> conditional use of mkstemp on those systems that support it?

Matt, please be sure to look at the 2.1 CVS tree.  I believe that
we've implemented some changes that may make mktemp() better behaved.

If you find that this is still not good enough, please feel free to
submit a patch to SourceForge that fixes the uses of mktemp() --
insofar possible.  (I know e.g. the test suite has some places where
mktemp() is used as the name of a dbm file.)

Thanks for looking into this!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From esr@snark.thyrsus.com  Mon Mar 12 23:36:00 2001
From: esr@snark.thyrsus.com (Eric S. Raymond)
Date: Mon, 12 Mar 2001 18:36:00 -0500
Subject: [Python-Dev] CML2 compiler slowness
Message-ID: <200103122336.f2CNa0W28998@snark.thyrsus.com>

(Copied to python-dev for informational purposes.)

I added some profiling apparatus to the CML2 compiler and investigated
mec's reports of a twenty-second startup.  I've just released the
version with profiling as 0.9.3, with fixes for all known bugs.

Nope, it's not the quadratic-time validation pass that's eating all
the cycles.  It's the expression parser I generated with John
Aycock's SPARK toolkit -- that's taking up an average of 26 seconds
out of an average 28-second runtime.

While I was at PC9 last week somebody mumbled something about Aycock's
code being cubic in time.  I should have heard ominous Jaws-style
theme music at that point, because that damn Earley-algorithm parser
has just swum up from the deeps and bitten me on the ass.

Looks like I'm going to have to hand-code an expression parser for
this puppy to speed it up at all.  *groan*  Anybody over on the Python
side know of a faster alternative LL or LR(1) parser generator or
factory class?
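
For the record, a hand-coded recursive-descent (LL(1)) expression parser
is small and runs in time linear in the token count. A sketch of the
shape such a replacement could take (grammar and names invented here,
not CML2's actual expression language):

```python
import re

_TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(text):
    """Yield ('NUM', int) and ('OP', char) tokens; whitespace is skipped."""
    for number, op in _TOKEN.findall(text):
        yield ("NUM", int(number)) if number else ("OP", op)

class ExprParser:
    """LL(1) recursive descent for + - * / and parentheses.
    One token of lookahead, no backtracking: linear time."""
    def __init__(self, text):
        self.toks = list(tokenize(text)) + [("END", None)]
        self.pos = 0

    def peek(self):
        return self.toks[self.pos]

    def advance(self):
        tok = self.toks[self.pos]
        self.pos += 1
        return tok

    def expr(self):                      # expr: term (('+'|'-') term)*
        value = self.term()
        while self.peek() in (("OP", "+"), ("OP", "-")):
            _, op = self.advance()
            value = value + self.term() if op == "+" else value - self.term()
        return value

    def term(self):                      # term: atom (('*'|'/') atom)*
        value = self.atom()
        while self.peek() in (("OP", "*"), ("OP", "/")):
            _, op = self.advance()
            value = value * self.atom() if op == "*" else value // self.atom()
        return value

    def atom(self):                      # atom: NUM | '(' expr ')'
        kind, val = self.advance()
        if kind == "NUM":
            return val
        if (kind, val) == ("OP", "("):
            value = self.expr()
            if self.advance() != ("OP", ")"):
                raise SyntaxError("missing ')'")
            return value
        raise SyntaxError("unexpected token %r" % (val,))

print(ExprParser("2 + 3 * (4 - 1)").expr())   # 11
```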
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

It will be of little avail to the people, that the laws are made by
men of their own choice, if the laws be so voluminous that they cannot
be read, or so incoherent that they cannot be understood; if they be
repealed or revised before they are promulgated, or undergo such
incessant changes that no man, who knows what the law is to-day, can
guess what it will be to-morrow. Law is defined to be a rule of
action; but how can that be a rule, which is little known, and less
fixed?
	-- James Madison, Federalist Papers 62


From guido@digicool.com  Mon Mar 12 23:32:37 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:32:37 -0500
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: Your message of "Mon, 12 Mar 2001 22:04:31 +0100."
 <20010312220425.T404@xs4all.nl>
References: <20010312220425.T404@xs4all.nl>
Message-ID: <200103122332.SAA22948@cj20424-a.reston1.va.home.com>

> Contrary to Guido's keynote last week <wink> there are still two warts I
> know of in the current CPython. One is the fact that keywords cannot be used
> as identifiers anywhere, the other is the fact that 'continue' can still not
> be used inside a 'finally' clause. If I remember correctly, the latter isn't
> too hard to fix, it just needs a decision on what it should do :)
> 
> Currently, falling out of a 'finally' block will reraise the exception, if
> any. Using 'return' and 'break' will drop the exception and continue on as
> usual. However, that makes sense (imho) mostly because 'break' will continue
> past the try/finally block and 'return' will break out of the function
> altogether. Neither have a chance of reentering the try/finally block
> altogether. I'm not sure if that would make sense for 'continue' inside
> 'finally'.
> 
> On the other hand, I'm not sure if it makes sense for 'break' to continue
> but for 'continue' to break. :)

If you can fix it, the semantics you suggest are reasonable: continue
loses the exception and continues the loop.

> As for the other wart, I still want to fix it, but I'm not sure when I get
> the chance to grok the parser-generator enough to actually do it :) 

Yes, that was on the list once but got dropped.  You might want to get
together with Finn and Samuele to see what their rules are.  (They
allow the use of some keywords at least as keyword=expression
arguments and as object.attribute names.)

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Mon Mar 12 23:41:01 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:41:01 -0500
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: Your message of "Mon, 12 Mar 2001 18:15:15 EST."
 <15021.22659.616556.298360@anthem.wooz.org>
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain>
 <15021.22659.616556.298360@anthem.wooz.org>
Message-ID: <200103122341.SAA23054@cj20424-a.reston1.va.home.com>

> >>>>> "UO" == Uche Ogbuji <uche.ogbuji@fourthought.com> writes:
> 
>     UO> I know this isn't the types SIG and all, but since it has come
>     UO> up here, I'd like to (once again) express my violent
>     UO> disagreement with the efforts to add static typing to Python.
>     UO> After this, I won't pursue the thread further here.
> 
> Thank you Uche!  I couldn't agree more, and will also try to follow
> your example, at least until we see much more concrete proposals from
> the types-sig.  I just want to make a few comments for the record.

Barry, you were supposed to throw a brick at me with this content at
the meeting, on Eric's behalf.  Why didn't you?  I was waiting for
someone to explain why this was a big idea, but everybody kept their
face shut!  :-(

> First, it seemed to me that the greatest push for static type
> annotations at IPC9 was from the folks implementing Python on top of
> frameworks other than C.  I know from my own experiences that there is
> the allure of improved performance, e.g. JPython, given type hints
> available to the compiler.  While perhaps a laudable goal, this
> doesn't seem to be a stated top priority of Paul's.
> 
> Second, if type annotations are to be seriously considered for
> inclusion in Python, I think we as a community need considerable
> experience with a working implementation.  Yes, we need PEPs and specs
> and such, but we need something real and complete that we can play
> with, /without/ having to commit to its acceptance in mainstream
> Python.  Therefore, I think it'll be very important for type
> annotation proponents to figure out a way to allow people to see and
> play with an implementation in an experimental way.

+1

> This might mean an extensive set of patches, a la Stackless.  After
> seeing and talking to Neil and Andrew about PTL and Quixote, I think
> there might be another way.  It seems that their approach might serve
> as a framework for experimental Python syntaxes with minimal overhead.
> If I understand their work correctly, they have their own compiler
> which is built on Jeremy's tools, and which accepts a modified Python
> grammar, generating different but compatible bytecode sequences.
> E.g., their syntax has a "template" keyword approximately equivalent
> to "def" and they do something different with bare strings left on the
> stack.

I'm not sure this is viable.  I believe Jeremy's compiler package
actually doesn't have its own parser -- it uses the parser module
(which invokes Python's standard parser) and then transmogrifies the
parse tree into something more usable, but it doesn't change the
syntax!  Quixote can get away with this because their only change
is giving a different meaning to stand-alone string literals.  But for
type annotations this doesn't give enough freedom, I expect.

> The key trick is that it all hooks together with an import hook so
> normal Python code doesn't need to know anything about the mechanics
> of PTL compilation.  Given a homepage.ptl file, they just do an
> "import homepage" and this gets magically transformed into a .ptlc
> file and normal Python objects.

That would be nice, indeed.

> If I've got this correct, it seems like it would be a powerful tool
> for playing with alternative Python syntaxes.  Ideally, the same
> technique would allow the types-sig folks to create a working
> implementation that would require only the installation of an import
> hook.  This would let them build their systems with type annotation
> and prove to the skeptical among us of their overwhelming benefit.

+1

--Guido van Rossum (home page: http://www.python.org/~guido/)


From thomas@xs4all.net  Mon Mar 12 23:47:14 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 00:47:14 +0100
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 06:19:39PM -0500
References: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com> <200103122319.SAA22854@cj20424-a.reston1.va.home.com>
Message-ID: <20010313004714.U404@xs4all.nl>

On Mon, Mar 12, 2001 at 06:19:39PM -0500, Guido van Rossum wrote:

> One thing we *could* agree to [at lunch], after I pressed
> some people: 1/2 should return 0.5. Possibly 1/2 should not be a
> binary floating point number -- but then 0.5 shouldn't either, and
> whatever happens, these (1/2 and 0.5) should have the same type, be it
> rational, binary float, or decimal float.

Actually, I didn't quite agree, and still don't quite agree (I'm just not
happy with this 'automatic upgrading of types') but I did agree to differ
in opinion and bow to your wishes ;) I did agree that if 1/2 should not
return 0, it should return 0.5 (an object of the same type as
0.5-the-literal.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@digicool.com  Mon Mar 12 23:48:00 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:48:00 -0500
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: Your message of "Mon, 12 Mar 2001 18:41:01 EST."
 <200103122341.SAA23054@cj20424-a.reston1.va.home.com>
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org>
 <200103122341.SAA23054@cj20424-a.reston1.va.home.com>
Message-ID: <200103122348.SAA23123@cj20424-a.reston1.va.home.com>

> Barry, you were supposed to throw a brick at me with this content at
> the meeting, on Eric's behalf.  Why didn't you?  I was waiting for
> someone to explain why this was a big idea, but everybody kept their
                                    ^^^^^^^^
> face shut!  :-(

/big idea/ -> /bad idea/ :-(

--Guido van Rossum (home page: http://www.python.org/~guido/)


From barry@digicool.com  Mon Mar 12 23:48:21 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 18:48:21 -0500
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl>
 <200103122332.SAA22948@cj20424-a.reston1.va.home.com>
Message-ID: <15021.24645.357064.856281@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:

    GvR> Yes, that was on the list once but got dropped.  You might
    GvR> want to get together with Finn and Samuele to see what their
    GvR> rules are.  (They allow the use of some keywords at least as
    GvR> keyword=expression arguments and as object.attribute names.)

I'm actually a little surprised that the "Jython vs. CPython"
differences page doesn't describe this (or am I missing it?):

    http://www.jython.org/docs/differences.html

I thought it used to.

IIRC, keywords were allowed if there was no question of them introducing
a statement.  So yes, keywords were allowed after the dot in attribute
lookups, and as keywords in argument lists, but not as variable names
on the lhs of an assignment (I don't remember if they were legal on
the rhs, but it seems like that ought to be okay, and is actually
necessary if you allow them in argument lists).

It would eliminate much of the need for writing obfuscated code like
"class_" or "klass".
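
Until then, the only escape hatch is reflection, since the name itself is
legal at runtime; only the bare identifier is rejected by the parser. A
small illustration:

```python
class Widget:
    pass

w = Widget()

# w.class = "button" and Widget(class="button") are SyntaxErrors, but
# the attribute/keyword name itself is perfectly legal at runtime:
setattr(w, "class", "button")
kind = getattr(w, "class")

def make(**attrs):
    return attrs

opts = make(**{"class": "button"})   # smuggle a keyword argument in

print(kind, opts)    # button {'class': 'button'}
```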

-Barry


From barry@digicool.com  Mon Mar 12 23:52:57 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 18:52:57 -0500
Subject: [Python-Dev] Revive the types sig?
References: <jeremy@alum.mit.edu>
 <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
 <200103120711.AAA09711@localhost.localdomain>
 <15021.22659.616556.298360@anthem.wooz.org>
 <200103122341.SAA23054@cj20424-a.reston1.va.home.com>
Message-ID: <15021.24921.998693.156809@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:

    GvR> Barry, you were supposed to throw a brick at me with this
    GvR> content at the meeting, on Eric's behalf.  Why didn't you?  I
    GvR> was waiting for someone to explain why this was a big idea,
    GvR> but everybody kept their face shut!  :-(

I actually thought I had, but maybe it was a brick made of bouncy spam
instead of concrete. :/

    GvR> I'm not sure this is viable.  I believe Jeremy's compiler
    GvR> package actually doesn't have its own parser -- it uses the
    GvR> parser module (which invokes Python's standard parser) and
    GvR> then transmogrifies the parse tree into something more
    GvR> usable, but it doesn't change the syntax!  Quixote can get
    GvR> away with this because their only change is giving a
    GvR> different meaning to stand-alone string literals.  But for
    GvR> type annotations this doesn't give enough freedom, I expect.

I thought PTL definitely included a "template" declaration keyword, a
la def, so they must have some solution here.  MEMS guys?

-Barry


From thomas@xs4all.net  Tue Mar 13 00:01:45 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 01:01:45 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <15021.24645.357064.856281@anthem.wooz.org>; from barry@digicool.com on Mon, Mar 12, 2001 at 06:48:21PM -0500
References: <20010312220425.T404@xs4all.nl> <200103122332.SAA22948@cj20424-a.reston1.va.home.com> <15021.24645.357064.856281@anthem.wooz.org>
Message-ID: <20010313010145.V404@xs4all.nl>

On Mon, Mar 12, 2001 at 06:48:21PM -0500, Barry A. Warsaw wrote:
> >>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:

>     GvR> Yes, that was on the list once but got dropped.  You might
>     GvR> want to get together with Finn and Samuele to see what their
>     GvR> rules are.  (They allow the use of some keywords at least as
>     GvR> keyword=expression arguments and as object.attribute names.)

> I'm actually a little surprised that the "Jython vs. CPython"
> differences page doesn't describe this (or am I missing it?):

Nope, it's not in there. It should be under the Syntax heading.

>     http://www.jython.org/docs/differences.html

Funnily enough:

"Jython supports continue in a try clause. CPython should be fixed - but
don't hold your breath."

It should be updated for CPython 2.1 when it's released? :-)

[*snip* how Barry thinks he remembers how Jython might handle keywords]

> It would eliminate much of the need for writing obfuscated code like
> "class_" or "klass".

Yup. That's one of the reasons I brought it up. (That, and Mark mentioned
it's actually necessary for .NET Python to adhere to 'the spec'.)

Holding-my-breath-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From nas@arctrix.com  Tue Mar 13 00:07:30 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Mon, 12 Mar 2001 16:07:30 -0800
Subject: [Python-Dev] parsers and import hooks [Was: Revive the types sig?]
In-Reply-To: <200103122341.SAA23054@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 06:41:01PM -0500
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org> <200103122341.SAA23054@cj20424-a.reston1.va.home.com>
Message-ID: <20010312160729.A2976@glacier.fnational.com>

[Recipient addresses brutally slashed.]

On Mon, Mar 12, 2001 at 06:41:01PM -0500, Guido van Rossum wrote:
> I'm not sure this is viable.  I believe Jeremy's compiler package
> actually doesn't have its own parser -- it uses the parser module
> (which invokes Python's standard parser) and then transmogrifies the
> parse tree into something more usable, but it doesn't change the
> syntax!

Yup.  Having a more flexible Python-like parser would be cool but
I don't think I'd ever try to implement it.  I know Christian
Tismer wants one.  Maybe he will volunteer. :-)

[On using import hooks to load modules with modified syntax/semantics]
> That would be nice, indeed.

It's nice if you can get it to work.  Import hooks are a bitch to
write and are slow.  Also, you get tracebacks from hell.  It
would be nice if there were higher-level hooks in the
interpreter.  imputil.py did not do the trick for me after
wrestling with it for hours.

  Neil


From nkauer@users.sourceforge.net  Tue Mar 13 00:09:10 2001
From: nkauer@users.sourceforge.net (Nikolas Kauer)
Date: Mon, 12 Mar 2001 18:09:10 -0600 (CST)
Subject: [Python-Dev] syntax exploration tool
In-Reply-To: <15021.22659.616556.298360@anthem.wooz.org>
Message-ID: <Pine.LNX.4.10.10103121801530.7351-100000@falcon.physics.wisc.edu>

I'd volunteer to put in time and help create such a tool.  If someone 
sufficiently knowledgeable decides to go ahead with such a project 
please let me know.

---
Nikolas Kauer <nkauer@users.sourceforge.net>

> Second, if type annotations are to be seriously considered for
> inclusion in Python, I think we as a community need considerable
> experience with a working implementation.  Yes, we need PEPs and specs
> and such, but we need something real and complete that we can play
> with, /without/ having to commit to its acceptance in mainstream
> Python.  Therefore, I think it'll be very important for type
> annotation proponents to figure out a way to allow people to see and
> play with an implementation in an experimental way.



From nas@arctrix.com  Tue Mar 13 00:13:04 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Mon, 12 Mar 2001 16:13:04 -0800
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <15021.24921.998693.156809@anthem.wooz.org>; from barry@digicool.com on Mon, Mar 12, 2001 at 06:52:57PM -0500
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org> <200103122341.SAA23054@cj20424-a.reston1.va.home.com> <15021.24921.998693.156809@anthem.wooz.org>
Message-ID: <20010312161304.B2976@glacier.fnational.com>

On Mon, Mar 12, 2001 at 06:52:57PM -0500, Barry A. Warsaw wrote:
> I thought PTL definitely included a "template" declaration keyword, a
> la def, so they must have some solution here.  MEMS guys?

The correct term is "hack".  We do a re.sub on the text of the
module.  I considered building a new parsermodule with def
changed to template but haven't had time yet.  I think the
dominant cost when importing a PTL module is due to stat() calls
driven by hairy Python code.
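A minimal sketch of that re.sub approach (the template name and body below are invented for illustration; the real PTL machinery does more bookkeeping):

```python
import re

# Hypothetical PTL-style source: "template" where Python wants "def".
SOURCE = """\
template greeting(name):
    "Hello, %s!" % name
"""

# The hack: rewrite "template" at the start of a (possibly indented)
# line to "def" before handing the text to the ordinary Python compiler.
python_src = re.sub(r"(?m)^(\s*)template\b", r"\1def", SOURCE)
print(python_src.splitlines()[0])  # → def greeting(name):
```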

  Neil


From jeremy@alum.mit.edu  Tue Mar 13 00:14:47 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Mon, 12 Mar 2001 19:14:47 -0500 (EST)
Subject: [Python-Dev] comments on PEP 219
Message-ID: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>

Here are some comments on Gordon's new draft of PEP 219 and the
stackless dev day discussion at Spam 9.

I left the dev day discussion with the following take-home message:
there is a tension between Stackless Python on the one hand and making
Python easy to embed in and extend with C programs on the other.
The PEP describes this as the major difficulty with C Python.  I won't
repeat the discussion of the problem there.

I would like to see a somewhat more detailed discussion of this in
the PEP.  I think it's an important issue to work out before making a
decision about a stack-light patch.

The problem of nested interpreters and the C API seems to come up in
several ways.  These are all touched on in the PEP, but not in much
detail.  This message is mostly a request for more detail :-).

  - Stackless disallows transfer out of a nested interpreter.  (It
    has to; anything else would be insane.)  Therefore, the
    specification for microthreads &c. will be complicated by a
    listing of the places where control transfers are not possible.
    The PEP says this is not ideal, but not crippling.  I'd like to
    see an actual spec for where it's not allowed in pure Python.  It
    may not be crippling, but it may be a tremendous nuisance in
    practice; e.g. remember that __init__ calls create a critical
    section.

  - If an application makes use of C extensions that do create nested
    interpreters, they will make it even harder to figure out when
    Python code is executing in a nested interpreter.  For a large
    systems with several C extensions, this could be complicated.  I
    presume, therefore, that there will be a C API for playing nice
    with stackless.  I'd like to see a PEP that discusses what this C
    API would look like.

  - Would all of the internal Python calls that create nested
    interpreters be replaced?  I'm thinking of things like
    PySequence_Fast() and the ternary_op() call in abstract.c.  How
    hard will it be to convert all these functions to be stackless?
    How many functions are affected?  And how many places are they
    called from?

  - What is the performance impact of adding the stackless patches?  I
    think Christian mentioned a 10% slowdown at dev day, which doesn't
    sound unreasonable.  Will reworking the entire interpreter to be
    stackless make that slowdown larger or smaller?

One other set of issues, that is sort-of out of bounds for this
particular PEP, is what control features do we want that can only be
implemented with stackless.  Can we implement generators or coroutines
efficiently without a stackless approach?

Jeremy


From aycock@csc.UVic.CA  Tue Mar 13 00:13:01 2001
From: aycock@csc.UVic.CA (John Aycock)
Date: Mon, 12 Mar 2001 16:13:01 -0800
Subject: [Python-Dev] Re: CML2 compiler slowness
Message-ID: <200103130013.QAA13925@valdes.csc.UVic.CA>

| From esr@snark.thyrsus.com Mon Mar 12 15:14:33 2001
| It's the expression parser I generated with John
| Aycock's SPARK toolkit -- that's taking up an average of 26 seconds
| out of an average 28-second runtime.
|
| While I was at PC9 last week somebody mumbled something about Aycock's
| code being cubic in time.  I should have heard ominous Jaws-style
| theme music at that point, because that damn Earley-algorithm parser
| has just swum up from the deeps and bitten me on the ass.

Eric:

You were partially correctly informed.  The time complexity of Earley's
algorithm is O(n^3) in the worst case, that being the meanest, nastiest,
most ambiguous context-free grammar you could possibly think of.  Unless
you're parsing natural language, this won't happen.  For any unambiguous
grammar, the worst case drops to O(n^2), and for a set of grammars which
loosely coincides with the LR(k) grammars, the complexity drops to O(n).

In other words, it's linear for most programming language grammars.  Now
the overhead for a general parsing algorithm like Earley's is of course
greater than that of a much more specialized algorithm, like LALR(1).

The next version of SPARK uses some of my research work into Earley's
algorithm and improves the speed quite dramatically.  It's not all
ready to go yet, but I can send you my working version which will give
you some idea of how fast it'll be for CML2.  Also, I assume you're
supplying a typestring() method to the parser class?  That speeds things
up as well.

John


From jepler@inetnebr.com  Mon Mar 12 23:38:42 2001
From: jepler@inetnebr.com (Jeff Epler)
Date: Mon, 12 Mar 2001 17:38:42 -0600
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <15021.22659.616556.298360@anthem.wooz.org>
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org>
Message-ID: <20010312173842.A3962@potty.housenet>

On Mon, Mar 12, 2001 at 06:15:15PM -0500, Barry A. Warsaw wrote:
> This might mean an extensive set of patches, a la Stackless.  After
> seeing and talking to Neil and Andrew about PTL and Quixote, I think
> there might be another way.  It seems that their approach might serve
> as a framework for experimental Python syntaxes with minimal overhead.
> If I understand their work correctly, they have their own compiler
> which is built on Jeremy's tools, and which accepts a modified Python
> grammar, generating different but compatible bytecode sequences.
> E.g., their syntax has a "template" keyword approximately equivalent
> to "def" and they do something different with bare strings left on the
> stack.

See also my project, "Möbius Python".[1]

I've used a lot of existing pieces, including the SPARK toolkit,
Tools/compiler, and Lib/tokenize.py.

The end result is a set of Python classes and functions that implement the
whole tokenize/parse/build AST/bytecompile process.  To the extent that
each component is modifiable or subclassable, Python's grammar and semantics
can be extended.  For example, new keywords and statement types can be
introduced (such as Quixote's 'tmpl'), new operators can be introduced
(such as |absolute value|), along with the associated semantics.

(At this time, there is only a limited potential to modify the tokenizer)

One big problem right now is that Möbius Python only implements the
1.5.2 language subset.

The CVS tree on sourceforge is not up to date, but the tree on my system is
pretty complete, lacking only documentation.  Unfortunately, even a small
modification requires a fair amount of code (my 'absolute value' extension
is 91 lines plus comments, empty lines, and imports).

As far as I know, all that Quixote does at the syntax level is a few
regular expression tricks.  Möbius Python is much more than this.

Jeff
[1] http://mobiuspython.sourceforge.net/


From tim.one@home.com  Tue Mar 13 01:14:34 2001
From: tim.one@home.com (Tim Peters)
Date: Mon, 12 Mar 2001 20:14:34 -0500
Subject: [Python-Dev] Re: CML2 compiler slowness
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDLJFAA.tim.one@home.com>

FYI, Fredrik's regexp engine also supports two undocumented match-object
attributes that could be used to speed SPARK lexing, especially when
there are many token types (gives a direct index to the matching alternative
instead of making you do a linear search for it -- that can add up to a major
win).  Simple example below.

Python-Dev, this has been in there since 2.0 (1.6?  unsure).  I've been using
it happily all along.  If Fredrik is agreeable, I'd like to see this
documented for 2.1, i.e. made an officially supported part of Python's regexp
facilities.

-----Original Message-----
From: Tim Peters [mailto:tim.one@home.com]
Sent: Monday, March 12, 2001 6:37 PM
To: python-list@python.org
Subject: RE: Help with Regular Expressions

[Raymond Hettinger]
> Is there an idiom for how to use regular expressions for lexing?
>
> My attempt below is unsatisfactory because it has to filter the
> entire match group dictionary to find out which token caused
> the match. This approach isn't scalable because every token
> match will require a loop over all possible token types.
>
> I've fiddled with this one for hours and can't seem to find a
> direct way to get a group dictionary that contains only matches.

That's because there isn't a direct way; best you can do now is seek to order
your alternatives most-likely first (which is a good idea anyway, given the
way the engine works).

If you peek inside sre.py (2.0 or later), you'll find an undocumented class
Scanner that uses the undocumented .lastindex attribute of match objects.
Someday I hope this will be the basis for solving exactly the problem you're
facing.  There's also an undocumented .lastgroup attribute:

Python 2.1b1 (#11, Mar  2 2001, 11:23:29) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
IDLE 0.6 -- press F1 for help
>>> import re
>>> pat = re.compile(r"(?P<a>aa)|(?P<b>bb)")
>>> m = pat.search("baab")
>>> m.lastindex  # numeral of group that matched
1
>>> m.lastgroup  # name of group that matched
'a'
>>> m = pat.search("ababba")
>>> m.lastindex
2
>>> m.lastgroup
'b'
>>>

They're not documented yet because we're not yet sure whether we want to make
them permanent parts of the language.  So feel free to play, but don't count
on them staying around forever.  If you like them, drop a note to the effbot
saying so.
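A sketch of the direct-index lexing this enables (the token names below are invented for illustration; the undocumented Scanner class in sre.py is built on the same idea):

```python
import re

# One alternative per token type; .lastgroup names the alternative
# that matched, so no linear scan over groupdict() is needed.
TOKEN_PAT = re.compile(r"(?P<NUM>\d+)|(?P<NAME>[A-Za-z_]\w*)|(?P<OP>[-+*/])")

def lex(text):
    for m in TOKEN_PAT.finditer(text):
        yield m.lastgroup, m.group()

print(list(lex("x+42")))  # → [('NAME', 'x'), ('OP', '+'), ('NUM', '42')]
```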

for-more-docs-read-the-source-code-ly y'rs  - tim



From paulp@ActiveState.com  Tue Mar 13 01:45:51 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 17:45:51 -0800
Subject: [Python-Dev] FOLLOWUPS!!!!!!!
References: <Pine.LNX.4.10.10103121801530.7351-100000@falcon.physics.wisc.edu>
Message-ID: <3AAD7BCF.4D4F69B7@ActiveState.com>

Please keep follow-ups to just types-sig. I'm very sorry I cross-posted
in the beginning and I apologize to everyone on multiple lists. I did
direct people to follow up only to types-sig but I should have used a
header....or separate posts!

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)


From ping@lfw.org  Tue Mar 13 01:56:27 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Mon, 12 Mar 2001 17:56:27 -0800 (PST)
Subject: [Python-Dev] parsers and import hooks
In-Reply-To: <20010312160729.A2976@glacier.fnational.com>
Message-ID: <Pine.LNX.4.10.10103121755110.13108-100000@skuld.kingmanhall.org>

On Mon, 12 Mar 2001, Neil Schemenauer wrote:
> 
> It's nice if you can get it to work.  Import hooks are a bitch to
> write and are slow.  Also, you get tracebacks from hell.  It
> would be nice if there were higher-level hooks in the
> interpreter.

Let me chime in with a request, please, for a higher-level find_module()
that understands packages -- or is there already some way to emulate the 
file-finding behaviour of "import x.y.z" that i don't know about?
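(For the record: a package-aware, higher-level find did eventually land in the importlib machinery of much later Pythons. A sketch of that modern equivalent, shown only to illustrate what's being asked for here:)

```python
import importlib.util

def find_dotted(name):
    # Resolve a dotted module name ("x.y.z") the way "import x.y.z"
    # locates it, walking the package path one level at a time internally.
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

print(find_dotted("email.mime.text").endswith("text.py"))  # → True
```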



-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso



From tim.one@home.com  Tue Mar 13 02:07:46 2001
From: tim.one@home.com (Tim Peters)
Date: Mon, 12 Mar 2001 21:07:46 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
In-Reply-To: <20010312164705.C641@devserv.devel.redhat.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEDMJFAA.tim.one@home.com>

[Matt Wilson]
> We've been auditing various code lately to check for /tmp races and so
> on.  It seems that tempfile.mktemp() is used throughout the Python
> library.  While nice and portable, tempfile.mktemp() is vulnerable to
> races.
> ...

Adding to what Guido said, the 2.1 mktemp() finally bites the bullet and uses
a mutex to ensure that no two threads (within a process) can ever generate
the same filename.  The 2.0 mktemp() was indeed subject to races in this
respect.  Freedom from cross-process races relies on using the pid in the
filename too.



From paulp@ActiveState.com  Tue Mar 13 02:18:13 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 18:18:13 -0800
Subject: [Python-Dev] CML2 compiler slowness
References: <200103122336.f2CNa0W28998@snark.thyrsus.com>
Message-ID: <3AAD8365.285CCCFE@ActiveState.com>

"Eric S. Raymond" wrote:
> 
> ...
> 
> Looks like I'm going to have to hand-code an expression parser for
> this puppy to speed it up at all.  *groan*  Anybody over on the Python
> side know of a faster alternative LL or LR(1) parser generator or
> factory class?

I tried to warn you about those Earley parsers. :)

  http://mail.python.org/pipermail/python-dev/2000-July/005321.html


Here are some pointers to other solutions:

Martel: http://www.biopython.org/~dalke/Martel

flex/bison: http://www.cs.utexas.edu/users/mcguire/software/fbmodule/

kwparsing: http://www.chordate.com/kwParsing/

mxTextTools: http://www.lemburg.com/files/python/mxTextTools.html

metalang: http://www.tibsnjoan.demon.co.uk/mxtext/Metalang.html

plex: http://www.cosc.canterbury.ac.nz/~greg/python/Plex/

pylr: http://starship.python.net/crew/scott/PyLR.html

SimpleParse: (offline?)

mcf tools: (offline?)

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)


From thomas@xs4all.net  Tue Mar 13 02:23:02 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 03:23:02 +0100
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Include frameobject.h,2.30,2.31
In-Reply-To: <E14ce4t-0007wS-00@usw-pr-cvs1.sourceforge.net>; from jhylton@usw-pr-web.sourceforge.net on Mon, Mar 12, 2001 at 05:58:23PM -0800
References: <E14ce4t-0007wS-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <20010313032302.W404@xs4all.nl>

On Mon, Mar 12, 2001 at 05:58:23PM -0800, Jeremy Hylton wrote:
> Modified Files:
> 	frameobject.h 
> Log Message:

> There is also a C API change: PyFrame_New() is reverting to its
> pre-2.1 signature.  The change introduced by nested scopes was a
> mistake.  XXX Is this okay between beta releases?

It is definitely fine by me ;-) And Guido's reason for not caring about it
breaking ("no one uses it") applies equally well to unbreaking it between
beta releases.

Backward-bigot-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From paulp@ActiveState.com  Tue Mar 13 03:01:14 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 19:01:14 -0800
Subject: [Python-Dev] Re: CML2 compiler slowness
References: <200103130013.QAA13925@valdes.csc.UVic.CA>
Message-ID: <3AAD8D7A.3634BC56@ActiveState.com>

John Aycock wrote:
> 
> ...
> 
> For any unambiguous
> grammar, the worst case drops to O(n^2), and for a set of grammars 
> which loosely coincides with the LR(k) grammars, the complexity drops 
> to O(n).

I'd say: "it's linear for optimal grammars for most programming
languages." But it doesn't warn you when you are making a "bad grammar"
(not LR(k)), so things just slow down as you add rules...

Is there a tutorial about how to make fast Spark grammars or should I go
back and re-read my compiler construction books?

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)


From barry@digicool.com  Tue Mar 13 02:56:42 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 21:56:42 -0500
Subject: [Python-Dev] Revive the types sig?
References: <jeremy@alum.mit.edu>
 <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
 <200103120711.AAA09711@localhost.localdomain>
 <15021.22659.616556.298360@anthem.wooz.org>
 <200103122341.SAA23054@cj20424-a.reston1.va.home.com>
 <15021.24921.998693.156809@anthem.wooz.org>
 <20010312161304.B2976@glacier.fnational.com>
Message-ID: <15021.35946.606279.267593@anthem.wooz.org>

>>>>> "NS" == Neil Schemenauer <nas@arctrix.com> writes:

    >> I thought PTL definitely included a "template" declaration
    >> keyword, a la def, so they must have some solution here.  MEMS
    >> guys?

    NS> The correct term is "hack".  We do a re.sub on the text of the
    NS> module.  I considered building a new parsermodule with def
    NS> changed to template but haven't had time yet.  I think the
    NS> dominant cost when importing a PTL module is due to stat() calls
    NS> driven by hairy Python code.

Ah, good to know, thanks.  I definitely think it would be A Cool Thing
if one could build a complete Python parser and compiler in Python.
Kind of along the lines of building the interpreter main loop in
Python as much as possible.  I know that /I'm/ not going to have any
time to contribute though (and others have more and better experience
in this area than I do).

-Barry


From paulp@ActiveState.com  Tue Mar 13 03:09:21 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 19:09:21 -0800
Subject: [Python-Dev] Revive the types sig?
References: <jeremy@alum.mit.edu>
 <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
 <200103120711.AAA09711@localhost.localdomain>
 <15021.22659.616556.298360@anthem.wooz.org>
 <200103122341.SAA23054@cj20424-a.reston1.va.home.com>
 <15021.24921.998693.156809@anthem.wooz.org>
 <20010312161304.B2976@glacier.fnational.com> <15021.35946.606279.267593@anthem.wooz.org>
Message-ID: <3AAD8F61.C61CAC85@ActiveState.com>

"Barry A. Warsaw" wrote:
> 
>...
> 
> Ah, good to know, thanks.  I definitely think it would be A Cool Thing
> if one could build a complete Python parser and compiler in Python.
> Kind of along the lines of building the interpreter main loop in
> Python as much as possible.  I know that /I'm/ not going to have any
> time to contribute though (and others have more and better experience
> in this area than I do).

I'm surprised that there are dozens of compiler compilers written in
Python but few people stepped forward to say that theirs supports Python
itself. mxTextTools has a Python parser...does anyone know how good it
is?

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)


From esr@thyrsus.com  Tue Mar 13 03:11:02 2001
From: esr@thyrsus.com (Eric S. Raymond)
Date: Mon, 12 Mar 2001 22:11:02 -0500
Subject: [Python-Dev] Re: [kbuild-devel] Re: CML2 compiler slowness
In-Reply-To: <200103130013.QAA13925@valdes.csc.UVic.CA>; from aycock@csc.UVic.CA on Mon, Mar 12, 2001 at 04:13:01PM -0800
References: <200103130013.QAA13925@valdes.csc.UVic.CA>
Message-ID: <20010312221102.A31473@thyrsus.com>

John Aycock <aycock@csc.UVic.CA>:
> The next version of SPARK uses some of my research work into Earley's
> algorithm and improves the speed quite dramatically.  It's not all
> ready to go yet, but I can send you my working version which will give
> you some idea of how fast it'll be for CML2.

I'd like to see it.

>                                             Also, I assume you're
> supplying a typestring() method to the parser class?  That speeds things
> up as well.

I supplied one.  The expression parser promptly dropped from 92% of
the total compiler run time to 87%, a whole 5% of improvement.

To paraphrase a famous line from E.E. "Doc" Smith, "I could eat a handful
of chad and *puke* a faster parser than that..."
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

[W]hat country can preserve its liberties, if its rulers are not
warned from time to time that [the] people preserve the spirit of
resistance?  Let them take arms...The tree of liberty must be
refreshed from time to time, with the blood of patriots and tyrants.
	-- Thomas Jefferson, letter to Col. William S. Smith, 1787 


From msw@redhat.com  Tue Mar 13 03:08:42 2001
From: msw@redhat.com (Matt Wilson)
Date: Mon, 12 Mar 2001 22:08:42 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEDMJFAA.tim.one@home.com>; from tim.one@home.com on Mon, Mar 12, 2001 at 09:07:46PM -0500
References: <20010312164705.C641@devserv.devel.redhat.com> <LNBBLJKPBEHFEDALKOLCMEDMJFAA.tim.one@home.com>
Message-ID: <20010312220842.A14634@devserv.devel.redhat.com>

Right, but this isn't the problem that I'm describing.  Because mktemp
just returns a "checked" filename, it is vulnerable to symlink attacks.
Python programs run as root have a small window of opportunity between
when mktemp checks for the existence of the temp file and when the
function calling mktemp actually uses it.

So, it's hostile out-of-process attacks I'm worrying about, and the
recent CVS changes don't address that.
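The race-free pattern is to create and open the file in one atomic step with O_CREAT|O_EXCL, so a pre-planted symlink makes the open fail instead of being followed; tempfile.mkstemp (added later, in Python 2.3) wraps exactly that:

```python
import os
import tempfile

# mkstemp returns an already-open descriptor, so there is no window
# between the existence check and the first use of the file.
fd, path = tempfile.mkstemp(prefix="audit-")
try:
    os.write(fd, b"scratch data")
finally:
    os.close(fd)
    os.remove(path)
print(os.path.exists(path))  # → False
```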

Cheers,

Matt

On Mon, Mar 12, 2001 at 09:07:46PM -0500, Tim Peters wrote:
> 
> Adding to what Guido said, the 2.1 mktemp() finally bites the bullet and uses
> a mutex to ensure that no two threads (within a process) can ever generate
> the same filename.  The 2.0 mktemp() was indeed subject to races in this
> respect.  Freedom from cross-process races relies on using the pid in the
> filename too.


From tim.one@home.com  Tue Mar 13 03:40:28 2001
From: tim.one@home.com (Tim Peters)
Date: Mon, 12 Mar 2001 22:40:28 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com>

[Guido, to David Ascher]
> ...
> One thing we *could* agree to there, after I pressed some people: 1/2
> should return 0.5.

FWIW, in a show of hands at the devday session after you left, an obvious
majority said they did object to the fact that 1/2 is 0 today.  This was
bold in the face of Paul Dubois's decibel-rich opposition <wink>.  There
was no consensus on what it *should* do instead, though.

> Possibly 1/2 should not be a binary floating point number -- but then
> 0.5 shouldn't either, and whatever happens, these (1/2 and 0.5) should
> have the same type, be it rational, binary float, or decimal float.

I don't know that imposing this formal simplicity is going to be a genuine
help, because the area it's addressing is inherently complex.  In such cases,
simplicity is bought at the cost of trying to wish away messy realities.
You're aiming for Python arithmetic that's about 5x simpler than Python
strings <0.7 wink>.

It rules out rationals because you already know how insisting on this rule
worked out in ABC (it didn't).

It rules out decimal floats because scientific users can't tolerate the
inefficiency of simulating arithmetic in software (software fp is at best
~10x slower than native fp, assuming expertly hand-optimized assembler
exploiting platform HW tricks), and aren't going to agree to stick physical
constants in strings to pass to some "BinaryFloat()" constructor.

That only leaves native HW floating-point, but you already know *that*
doesn't work for newbies either.

Presumably ABC used rationals because usability studies showed they worked
best (or didn't they test this?).  Presumably the TeachScheme! dialect of
Scheme uses rationals for the same reason.  Curiously, the latter behaves
differently depending on "language level":

> (define x (/ 2 3))
> x
2/3
> (+ x 0.5)
1.1666666666666665
>

That's what you get under the "Full Scheme" setting.  Under all other
settings (Beginning, Intermediate, and Advanced Student), you get this
instead:

> (define x (/ 2 3))
> x
2/3
> (+ x 0.5)
7/6
>

In those you have to tag 0.5 as being inexact in order to avoid having it
treated as ABC did (i.e., as an exact decimal rational):

> (+ x #i0.5)
#i1.1666666666666665
>

> (- (* .58 100) 58)   ; showing that .58 is treated as exact
0
> (- (* #i.58 100) 58) ; same IEEE result as Python when .58 tagged w/ #i
#i-7.105427357601002e-015
>

So that's their conclusion:  exact rationals are best for students at all
levels (apparently the same conclusion reached by ABC), but when you get to
the real world rationals are no longer a suitable meaning for fp literals
(apparently the same conclusion *I* reached from using ABC; 1/10 and 0.1 are
indeed very different beasts to me).
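(Much later, Python grew an exact-rational type of its own, fractions.Fraction in 2.6, which reproduces the student-level Scheme behaviour, while a mixed float operand is still "contagious" toward binary floating point. A sketch mirroring the Scheme session above:)

```python
from fractions import Fraction

x = Fraction(2, 3)
print(x + Fraction(1, 2))           # exact arithmetic: 7/6
print(x + 0.5)                      # float operand forces binary fp
print(Fraction("0.58") * 100 - 58)  # exact decimal rational: 0
print(0.58 * 100 - 58)              # binary fp residue, not exactly 0
```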

A hard question:  what if they're right?  That is, that you have to favor one
of newbies or experienced users at the cost of genuine harm to the other?



From aycock@csc.UVic.CA  Tue Mar 13 03:32:54 2001
From: aycock@csc.UVic.CA (John Aycock)
Date: Mon, 12 Mar 2001 19:32:54 -0800
Subject: [Python-Dev] Re: [kbuild-devel] Re: CML2 compiler slowness
Message-ID: <200103130332.TAA17222@valdes.csc.UVic.CA>

Eric the Poet <esr@thyrsus.com> writes:
| To paraphrase a famous line from E.E. "Doc" Smith, "I could eat a handful
| of chad and *puke* a faster parser than that..."

Indeed.  Very colorful.

I'm sending you the in-development version of SPARK in a separate
message.

John


From martin@loewis.home.cs.tu-berlin.de  Tue Mar 13 06:06:13 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 13 Mar 2001 07:06:13 +0100
Subject: [Python-Dev] more Solaris extension grief
Message-ID: <200103130606.f2D66D803507@mira.informatik.hu-berlin.de>

gcc -shared  ./PyEnforcer.o  -L/home/gvwilson/cozumel/merlot/enforcer
-lenforcer -lopenssl -lstdc++  -o ./PyEnforcer.so

> Text relocation remains                         referenced
>    against symbol                  offset      in file
> istream type_info function          0x1c
> /usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strstream.o)
> istream type_info function          0x18

> Has anyone seen this problem before?

Yes, there have been a number of SF bug reports on that, and proposals
to fix that. It's partly a policy issue, but I believe all these
patches have been wrong, as the problem is not in Python.

When you build a shared library, it ought to be
position-independent. If it is not, the linker will need to put
relocation instructions into the text segment, which means that the
text segment has to be writable. In turn, the text of the shared
library will not be demand-paged anymore, but copied into main memory
when the shared library is loaded. Therefore, gcc asks ld to issue an
error if non-PIC code is integrated into a shared object.

To have the compiler emit position-independent code, you need to pass
the -fPIC option when producing object files. You not only need to do
that for your own object files, but for the object files of all the
static libraries you are linking with. In your case, the static
library is libstdc++.a.
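In build-recipe form, the fix boils down to adding -fPIC at compile time for every object that ends up inside the .so (file names follow the example above; this is a sketch, not a verified recipe):

```shell
# Compile the extension's own objects as position-independent code.
g++ -fPIC -c PyEnforcer.cpp -o PyEnforcer.o

# Link the shared module; any static library folded in here
# (e.g. libstdc++.a) must itself have been built with -fPIC.
g++ -shared PyEnforcer.o -L/home/gvwilson/cozumel/merlot/enforcer \
    -lenforcer -lopenssl -o PyEnforcer.so
```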

Please note that linking libstdc++.a statically not only means that
you lose position-independence; it also means that you end up with a
copy of libstdc++.a in each extension module that you link with it.
In turn, global objects defined in the library may be constructed
twice (I believe).

There are a number of solutions:

a) Build libstdc++ as a  shared library. This is done on Linux, so
   you don't get the error on Linux.

b) Build libstdc++.a using -fPIC. The gcc build process does not
   support such a configuration, so you'd need to arrange that
   yourself.

c) Pass the -mimpure-text option to gcc when linking. That will make
   the text segment writable, and silence the linker.

There was one proposal that looks like it would work, but doesn't:

d) Instead of linking with -shared, link with -G. That forgets to link
   the shared library startup files (crtbeginS/crtendS) into the shared
   library, which in turn means that constructors of global objects will
   fail to work; it also does a number of other things incorrectly.

Regards,
Martin


From martin@loewis.home.cs.tu-berlin.de  Tue Mar 13 06:12:41 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 13 Mar 2001 07:12:41 +0100
Subject: [Python-Dev] CML2 compiler slowness
Message-ID: <200103130612.f2D6Cfa03574@mira.informatik.hu-berlin.de>

> Anybody over on the Python side know of a faster alternative LL or
> LR(1) parser generator or factory class?

I'm using Yapps (http://theory.stanford.edu/~amitp/Yapps/), and find
it quite convenient, and also sufficiently fast (it gives, together
with sre, a factor of two or three over a flex/bison solution of XPath
parsing). I've been using my own lexer (using sre), both to improve
speed and to deal with the subtleties of XPath tokenization.  If
you can send me the grammar and some sample sentences, I can help
write a Yapps parser (as I think Yapps is an under-used kit).

Again, this question is probably better asked on python-list than
python-dev...

Regards,
Martin


From trentm@ActiveState.com  Tue Mar 13 06:56:12 2001
From: trentm@ActiveState.com (Trent Mick)
Date: Mon, 12 Mar 2001 22:56:12 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 06:19:39PM -0500
References: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com> <200103122319.SAA22854@cj20424-a.reston1.va.home.com>
Message-ID: <20010312225612.H8460@ActiveState.com>

I just want to add that one of the main participants in the Numeric Coercion
session was Paul Dubois and I am not sure that he is on python-dev. He should
probably be in this discussion.

Trent

-- 
Trent Mick
TrentM@ActiveState.com


From guido@digicool.com  Tue Mar 13 09:58:32 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 04:58:32 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Include frameobject.h,2.30,2.31
In-Reply-To: Your message of "Tue, 13 Mar 2001 03:23:02 +0100."
 <20010313032302.W404@xs4all.nl>
References: <E14ce4t-0007wS-00@usw-pr-cvs1.sourceforge.net>
 <20010313032302.W404@xs4all.nl>
Message-ID: <200103130958.EAA29951@cj20424-a.reston1.va.home.com>

> On Mon, Mar 12, 2001 at 05:58:23PM -0800, Jeremy Hylton wrote:
> > Modified Files:
> > 	frameobject.h 
> > Log Message:
> 
> > There is also a C API change: PyFrame_New() is reverting to its
> > pre-2.1 signature.  The change introduced by nested scopes was a
> > mistake.  XXX Is this okay between beta releases?
> 
> It is definitely fine by me ;-) And Guido's reason for not caring about it
> breaking ("no one uses it") applies equally well to unbreaking it between
> beta releases.

This is a good thing!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Tue Mar 13 10:18:35 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 05:18:35 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: Your message of "Mon, 12 Mar 2001 22:40:28 EST."
 <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com>
Message-ID: <200103131018.FAA30047@cj20424-a.reston1.va.home.com>

> [Guido, to David Ascher]
> > ...
> > One thing we *could* agree to there, after I pressed some people: 1/2
> > should return 0.5.
> 
> FWIW, in a show of hands at the dev day session after you left, an obvious
> majority said they do object to the fact that 1/2 is 0 today.  This was bold in the
> face of Paul Dubois's decibel-rich opposition <wink>.  There was no consensus
> on what it *should* do instead, though.
> 
> > Possibly 1/2 should not be a binary floating point number -- but then
> > 0.5 shouldn't either, and whatever happens, these (1/2 and 0.5) should
> > have the same type, be it rational, binary float, or decimal float.
> 
> I don't know that imposing this formal simplicity is going to be a genuine
> help, because the area it's addressing is inherently complex.  In such cases,
> simplicity is bought at the cost of trying to wish away messy realities.
> You're aiming for Python arithmetic that's about 5x simpler than Python
> strings <0.7 wink>.
> 
> It rules out rationals because you already know how insisting on this rule
> worked out in ABC (it didn't).
> 
> It rules out decimal floats because scientific users can't tolerate the
> inefficiency of simulating arithmetic in software (software fp is at best
> ~10x slower than native fp, assuming expertly hand-optimized assembler
> exploiting platform HW tricks), and aren't going to agree to stick physical
> constants in strings to pass to some "BinaryFloat()" constructor.
> 
> That only leaves native HW floating-point, but you already know *that*
> doesn't work for newbies either.

I'd like to argue about that.  I think the extent to which HWFP
doesn't work for newbies is mostly related to the change we made in
2.0 where repr() (and hence the interactive prompt) show full
precision, leading to annoyances like repr(1.1) == '1.1000000000000001'.

I've noticed that the number of complaints I see about this went way
up after 2.0 was released.

I expect that most newbies don't use floating point in a fancy way,
and would never notice it if it was slightly off as long as the output
was rounded like it was before 2.0.
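
The full-vs-rounded difference can be reproduced with plain string
formatting (a sketch: repr() in 2.0 corresponds to 17 significant
digits, str() to 12):

```python
# repr() in Python 2.0 switched to 17 significant digits (enough to
# round-trip any double); str() kept using 12, which hides the noise.
full = '%.17g' % 1.1      # what the 2.0 interactive prompt shows
short = '%.12g' % 1.1     # what str() -- and the pre-2.0 prompt -- showed

print(full)    # 1.1000000000000001
print(short)   # 1.1
```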

> Presumably ABC used rationals because usability studies showed they worked
> best (or didn't they test this?).

No, I think at best the usability studies showed that floating point
had problems that the ABC authors weren't able to clearly explain to
newbies.  There was never an experiment comparing FP to rationals.

> Presumably the TeachScheme! dialect of
> Scheme uses rationals for the same reason.

Probably for the same reasons.

> Curiously, the latter behaves
> differently depending on "language level":
> 
> > (define x (/ 2 3))
> > x
> 2/3
> > (+ x 0.5)
> 1.1666666666666665
> >
> 
> That's what you get under the "Full Scheme" setting.  Under all other
> settings (Beginning, Intermediate, and Advanced Student), you get this
> instead:
> 
> > (define x (/ 2 3))
> > x
> 2/3
> > (+ x 0.5)
> 7/6
> >
> 
> In those you have to tag 0.5 as being inexact in order to avoid having it
> treated as ABC did (i.e., as an exact decimal rational):
> 
> > (+ x #i0.5)
> #i1.1666666666666665
> >
> 
> > (- (* .58 100) 58)   ; showing that .58 is treated as exact
> 0
> > (- (* #i.58 100) 58) ; same IEEE result as Python when .58 tagged w/ #i
> #i-7.105427357601002e-015
> >
> 
> So that's their conclusion:  exact rationals are best for students at all
> levels (apparently the same conclusion reached by ABC), but when you get to
> the real world rationals are no longer a suitable meaning for fp literals
> (apparently the same conclusion *I* reached from using ABC; 1/10 and 0.1 are
> indeed very different beasts to me).

Another hard question: does that mean that 1 and 1.0 are also very
different beasts to you?  They weren't to the Alice users who started
this by expecting 1/4 to represent a quarter turn.

> A hard question:  what if they're right?  That is, that you have to favor one
> of newbies or experienced users at the cost of genuine harm to the other?

You know where I'm leaning...  I don't know that newbies are genuinely
hurt by FP.  If we do it right, the naive ones will try 11.0/10.0, see
that it prints 1.1, and be happy; the persistent ones will try
1.1**2-1.21, ask for an explanation, and get an introduction to
floating point.  This *doesn't* have to explain all the details, just
the two facts that you can lose precision and that 1.1 isn't
representable exactly in binary.  Only the latter should be new to
them.
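
Both experiments behave as described (shown here alongside a hedged
modern addition: the fractions module provides exact rationals, but was
added to Python much later):

```python
from fractions import Fraction   # exact rationals; not available in 2001

# The naive experiment: looks clean once the prompt rounds its output.
print(11.0 / 10.0)               # prints 1.1 on interpreters with rounded repr

# The persistent experiment: 1.1 has no exact binary representation,
# so a tiny error surfaces.
print(1.1 ** 2 - 1.21)           # a small nonzero value near 2.2e-16

# With exact rationals, the same computation is exactly zero.
print(Fraction(11, 10) ** 2 - Fraction(121, 100))   # 0
```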

--Guido van Rossum (home page: http://www.python.org/~guido/)


From paulp@ActiveState.com  Tue Mar 13 11:45:21 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Tue, 13 Mar 2001 03:45:21 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> <200103131018.FAA30047@cj20424-a.reston1.va.home.com>
Message-ID: <3AAE0851.3B683941@ActiveState.com>

Guido van Rossum wrote:
> 
>...
> 
> You know where I'm leaning...  I don't know that newbies are genuinely
> hurt by FP.  If we do it right, the naive ones will try 11.0/10.0, see
> that it prints 1.1, and be happy; the persistent ones will try
> 1.1**2-1.21, ask for an explanation, and get an introduction to
> floating point.  This *doesn't* have to explain all the details, just
> the two facts that you can lose precision and that 1.1 isn't
> representable exactly in binary.  Only the latter should be new to
> them.

David Ascher suggested during the talk that comparisons of floats could
raise a warning unless you turned that warning off (which only
knowledgeable people would do). I think that would go a long way to
helping them find and deal with serious floating point inaccuracies in
their code.

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)


From guido@digicool.com  Tue Mar 13 11:42:35 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 06:42:35 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: Your message of "Tue, 13 Mar 2001 03:45:21 PST."
 <3AAE0851.3B683941@ActiveState.com>
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> <200103131018.FAA30047@cj20424-a.reston1.va.home.com>
 <3AAE0851.3B683941@ActiveState.com>
Message-ID: <200103131142.GAA30567@cj20424-a.reston1.va.home.com>

[me]
> > You know where I'm leaning...  I don't know that newbies are genuinely
> > hurt by FP.  If we do it right, the naive ones will try 11.0/10.0, see
> > that it prints 1.1, and be happy; the persistent ones will try
> > 1.1**2-1.21, ask for an explanation, and get an introduction to
> > floating point.  This *doesn't* have to explain all the details, just
> > the two facts that you can lose precision and that 1.1 isn't
> > representable exactly in binary.  Only the latter should be new to
> > them.

[Paul]
> David Ascher suggested during the talk that comparisons of floats could
> raise a warning unless you turned that warning off (which only
> knowledgeable people would do). I think that would go a long way to
> helping them find and deal with serious floating point inaccuracies in
> their code.

You mean only for == and !=, right?  This could easily be implemented
now that we have rich comparisons.  We should wait until 2.2 though --
we haven't clearly decided that this is the way we want to go.
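
A minimal sketch of the idea, done as a float subclass in pure Python
rather than in the interpreter itself (every name below is invented for
illustration):

```python
import warnings

class CheckedFloat(float):
    """A float whose == and != emit a warning -- a toy model of the
    proposed behavior; knowledgeable users would disable the filter."""

    __hash__ = float.__hash__

    def __eq__(self, other):
        warnings.warn("comparing floats with == is rarely exact",
                      stacklevel=2)
        return float.__eq__(self, other)

    def __ne__(self, other):
        warnings.warn("comparing floats with != is rarely exact",
                      stacklevel=2)
        return float.__ne__(self, other)
```

With `warnings.simplefilter('error')` such comparisons would raise;
with `'ignore'` they behave exactly as plain floats do today.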

--Guido van Rossum (home page: http://www.python.org/~guido/)


From thomas@xs4all.net  Tue Mar 13 11:54:19 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 12:54:19 +0100
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131018.FAA30047@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Tue, Mar 13, 2001 at 05:18:35AM -0500
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> <200103131018.FAA30047@cj20424-a.reston1.va.home.com>
Message-ID: <20010313125418.A404@xs4all.nl>

On Tue, Mar 13, 2001 at 05:18:35AM -0500, Guido van Rossum wrote:

> I think the extent to which HWFP doesn't work for newbies is mostly
> related to the change we made in 2.0 where repr() (and hence the
> interactive prompt) show full precision, leading to annoyances like
> repr(1.1) == '1.1000000000000001'.
> 
> I've noticed that the number of complaints I see about this went way up
> after 2.0 was released.
> 
> I expect that most newbies don't use floating point in a fancy way, and
> would never notice it if it was slightly off as long as the output was
> rounded like it was before 2.0.

I suspect that the change in float.__repr__() did reduce the number of
surprises over something like this, though (taken from a 1.5.2 interpreter):

>>> x = 1.000000000001
>>> x
1.0
>>> x == 1.0
0

If we go for the HWFP + loosened precision in printing you seem to prefer,
we should be conscious of this, possibly raising a warning when comparing
floats in this way. (Or in any way at all ? Given that when you compare two
floats, you either didn't intend to, or your name is Tim or Moshe and you
would be just as happy writing the IEEE754 binary representation directly :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tismer@tismer.com  Tue Mar 13 13:29:53 2001
From: tismer@tismer.com (Christian Tismer)
Date: Tue, 13 Mar 2001 14:29:53 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <3AAE20D1.5D375ECB@tismer.com>

Ok, I'm adding some comments.

Jeremy Hylton wrote:
> 
> Here are some comments on Gordon's new draft of PEP 219 and the
> stackless dev day discussion at Spam 9.
> 
> I left the dev day discussion with the following take-home message:
> There is a tension between Stackless Python on the one hand and making
> Python easy to embed in and extend with C programs on the other hand.
> The PEP describes this as the major difficulty with C Python.  I won't
> repeat the discussion of the problem there.
> 
> I would like to see a somewhat more detailed discussion of this in
> the PEP.  I think it's an important issue to work out before making a
> decision about a stack-light patch.
> 
> The problem of nested interpreters and the C API seems to come up in
> several ways.  These are all touched on in the PEP, but not in much
> detail.  This message is mostly a request for more detail :-).
> 
>   - Stackless disallows transfer out of a nested interpreter.  (It
>     has to; anything else would be insane.)  Therefore, the
>     specification for microthreads &c. will be complicated by a
>     listing of the places where control transfers are not possible.

To be more precise: Stackless catches any attempt to transfer to a
frame that has been locked (i.e., is being run) by an interpreter that is not
the topmost of the C stack. That's all. You might even run Microthreads
in the fifth interpreter recursion, and later return to other
(stalled) microthreads, as long as this condition is met.

>     The PEP says this is not ideal, but not crippling.  I'd like to
>     see an actual spec for where it's not allowed in pure Python.  It
>     may not be crippling, but it may be a tremendous nuisance in
>     practice; e.g. remember that __init__ calls create a critical
>     section.

At the moment, *all* of the __xxx__ methods are restricted to stack-
like behavior. __init__ and __getitem__ should probably be the first
methods beyond Stack-lite, which should get extra treatment.

>   - If an application makes use of C extensions that do create nested
>     interpreters, they will make it even harder to figure out when
>     Python code is executing in a nested interpreter.  For a large
>     systems with several C extensions, this could be complicated.  I
>     presume, therefore, that there will be a C API for playing nice
>     with stackless.  I'd like to see a PEP that discusses what this C
>     API would look like.

Ok. I see the need for an interface for frames here.
An extension should be able to create a frame, together with
necessary local memory.
It appears to need two or three functions in the extension:
1) Preparation phase
   The extension provides an "interpreter" function which is in
   charge of handling this frame. The preparation phase puts a
   pointer to this function into the frame.
2) Execution phase
   The frame is run by the frame dispatcher, which calls the
   interpreter function.
   For every nested call into Python, the interpreter function
   needs to return with a special signal for the scheduler,
   that there is now a different frame to be scheduled.
   These notifications, and the modification of the frame chain, should
   be hidden by API calls.
3) cleanup phase (necessary?)
   A finalization function may be (optionally) provided for
   the frame destructor.
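
The dispatch loop in phase (2) can be mimicked in pure Python with
generators (a toy analogy only; the actual proposal concerns C-level
frames, and every name below is invented):

```python
def dispatcher(frame):
    """Drive generator 'frames' without growing the call stack.

    Yielding another generator signals "a different frame is to be
    scheduled"; yielding a plain value ends the frame and hands the
    value back to its calling frame.
    """
    stack = [frame]
    result = None
    while stack:
        top = stack[-1]
        try:
            signal = top.send(result)
        except StopIteration:
            stack.pop()
            continue
        if hasattr(signal, 'send'):   # a nested "frame": schedule it
            stack.append(signal)
            result = None
        else:                         # a plain value: return to caller frame
            stack.pop()
            result = signal
    return result

def add(a, b):
    yield a + b

def compute():
    x = yield add(1, 2)       # a "nested call", routed via the dispatcher
    y = yield add(x, 10)
    yield y                   # final result: 13

print(dispatcher(compute()))  # 13
```

Note that nested calls return to the dispatcher instead of recursing,
which is exactly the property the C API currently cannot guarantee.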

>   - Would all of the internal Python calls that create nested
>     interpreters be replaced?  I'm thinking of things like
>     PySequence_Fast() and the ternary_op() call in abstract.c.  How
>     hard will it be to convert all these functions to be stackless?

PySequence_Fast() calls back into PySequence_Tuple(). In the generic
sequence case, it calls 
       PyObject *item = (*m->sq_item)(v, i);

This call may now need to return to the frame dispatcher without
having its work done. But we cannot do this, because the current
API guarantees that this method will return either with a result
or an exception. This means we can, of course, modify the interpreter
to deal with a third kind of state, but this would probably break
some existing extensions.
That was the reason why I didn't try to go further here: whatever
is exposed to code other than Python itself might be broken by such
an extension, unless we find a way to distinguish *who* is calling.
On the other hand, if we are really at a new Python,
incompatibility would be just ok, and the problem would vanish.

>     How many functions are affected?  And how many places are they
>     called from?

This needs more investigation.

>   - What is the performance impact of adding the stackless patches?  I
>     think Christian mentioned a 10% slowdown at dev day, which doesn't
>     sound unreasonable.  Will reworking the entire interpreter to be
>     stackless make that slowdown larger or smaller?

No, it is about 5 percent. My optimization gains about 15 percent,
which makes a win of 10 percent overall.
The speed loss seems to be related to extra initialization calls
for frames, and the somewhat more difficult parameter protocol.
The fact that recursions are turned into repetitive calls from
a scheduler seems to have no impact. In other words: Further
"stackless" versions of internal functions will probably not
produce another slowdown.
This matches the observation that the number of function calls
is nearly the same, whether recursion is used or stackless.
It is mainly the order of function calls that is changed.

> One other set of issues, that is sort-of out of bounds for this
> particular PEP, is what control features do we want that can only be
> implemented with stackless.  Can we implement generators or coroutines
> efficiently without a stackless approach?

For a somewhat limited view of generators: yes, absolutely. *)
For coroutines: For sure not.

*) generators which live in the context of the calling
function, like the stack-based generator implementation of
one of the first ICON implementations, I think.
That is, these generators cannot be re-used somewhere else.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From uche.ogbuji@fourthought.com  Tue Mar 13 14:47:17 2001
From: uche.ogbuji@fourthought.com (Uche Ogbuji)
Date: Tue, 13 Mar 2001 07:47:17 -0700
Subject: [Python-Dev] comments on PEP 219
In-Reply-To: Message from Jeremy Hylton <jeremy@alum.mit.edu>
 of "Mon, 12 Mar 2001 19:14:47 EST." <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103131447.HAA32016@localhost.localdomain>

> Here are some comments on Gordon's new draft of PEP 219 and the
> stackless dev day discussion at Spam 9.
> 
> I left the dev day discussion with the following take-home message:
> There is a tension between Stackless Python on the one hand and making
> Python easy to embed in and extend with C programs on the other hand.
> The PEP describes this as the major difficulty with C Python.  I won't
> repeat the discussion of the problem there.

You know, even though I would like to have some of the Stackless features, my 
skeptical reaction to some of the other Grand Ideas circulating at IPC9, 
including static types, leads me to think I might not be thinking clearly on 
the Stackless question.

I think that if there is no way to address the many important concerns raised 
by people at the Stackless session (minus the "easy to learn" argument IMO), 
Stackless is probably a bad idea to shove into Python.

I still think that the Stackless execution structure would be a huge 
performance boost in many XML processing tasks, but that's not worth making 
Python intractable for extension writers.

Maybe it's not so bad for Stackless to remain a branch, given how closely 
Christian can work with Pythonlabs.  The main problem is the load on 
Christian, which would be mitigated as he gained collaborators.  The other 
problem would be that interested extension writers might need to maintain 2 
code-bases as well.  Maybe one could develop some sort of adaptor.

Or maybe Stackless should move to core, but only in P3K in which extension 
writers should be expecting weird and wonderful new models, anyway (right?)


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji@fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python




From tismer@tismer.com  Tue Mar 13 15:12:03 2001
From: tismer@tismer.com (Christian Tismer)
Date: Tue, 13 Mar 2001 16:12:03 +0100
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
References: <200103131447.HAA32016@localhost.localdomain>
Message-ID: <3AAE38C3.2C9BAA08@tismer.com>


Uche Ogbuji wrote:
> 
> > Here are some comments on Gordon's new draft of PEP 219 and the
> > stackless dev day discussion at Spam 9.
> >
> > I left the dev day discussion with the following take-home message:
> > There is a tension between Stackless Python on the one hand and making
> > Python easy to embed in and extend with C programs on the other hand.
> > The PEP describes this as the major difficulty with C Python.  I won't
> > repeat the discussion of the problem there.
> 
> You know, even though I would like to have some of the Stackless features, my
> skeptical reaction to some of the other Grand Ideas circulating at IPC9,
> including static types leads me to think I might not be thinking clearly on
> the Stackless question.
> 
> I think that if there is no way to address the many important concerns raised
> by people at the Stackless session (minus the "easy to learn" argument IMO),
> Stackless is probably a bad idea to shove into Python.

Maybe I'm repeating myself, but I'd like to clarify:
I do not plan to introduce anything that forces anybody to change
her code. This is all about extending the current capabilities.

> I still think that the Stackless execution structure would be a huge
> performance boost in many XML processing tasks, but that's not worth making
> Python intractable for extension writers.

Extension writers only have to think about the Stackless
protocol (to be defined) if they want to play the Stackless
game. If this is not intended, this isn't all that bad. It only means
that they cannot switch a microthread while the extension does
a callback.
But that is all the same as today. So how could Stackless make
extensions intractable, unless someone *wants* to get all of it?

An XML processor in C will not take advantage of Stackless unless
it is designed for that. But nobody enforces this. Stackless can
behave as recursively as standard Python, and it is completely aware
of recursions. It will not break.

It is the programmer's choice whether or not to make an extension
switchable. This is simply one more option than exists today.

> Maybe it's not so bad for Stackless to remain a branch, given how closely
> Christian can work with Pythonlabs.  The main problem is the load on
> Christian, which would be mitigated as he gained collaborators.  The other
> problem would be that interested extension writers might need to maintain 2
> code-bases as well.  Maybe one could develop some sort of adaptor.
> 
> Or maybe Stackless should move to core, but only in P3K in which extension
> writers should be expecting weird and wonderful new models, anyway (right?)

That's no alternative. Remember Guido's words:
P3K will never become reality. It is a virtual
place to put all the things that might happen in some future.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From esr@snark.thyrsus.com  Tue Mar 13 15:32:51 2001
From: esr@snark.thyrsus.com (Eric S. Raymond)
Date: Tue, 13 Mar 2001 10:32:51 -0500
Subject: [Python-Dev] CML2 compiler speedup
Message-ID: <200103131532.f2DFWpw04691@snark.thyrsus.com>

I bit the bullet and hand-rolled a recursive-descent expression parser
for CML2 to replace the Earley-algorithm parser described in my
previous note.  It is a little more than twice as fast as the SPARK
code, cutting the CML2 compiler runtime almost exactly in half.

Sigh.  I had been intending to recommend SPARK for the Python standard
library -- as I pointed out in my IPC9 paper, it would be the last
piece stock Python needs to be an effective workbench for
minilanguage construction.  Unfortunately I'm now convinced Paul
Prescod is right and it's too slow for production use, at least at
version 0.6.1.  

John Aycock says 0.7 will be substantially faster; I'll keep an eye on
this.
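
For readers who haven't written one, a hand-rolled recursive-descent
expression parser is quite small; a generic sketch (plain arithmetic,
not Eric's CML2 grammar) looks like this:

```python
import re

# Tokens: integers, or any single non-space character (operators, parens).
TOKEN = re.compile(r'\s*(?:(\d+)|(.))')

def tokenize(text):
    tokens = []
    for number, op in TOKEN.findall(text):
        if number:
            tokens.append(int(number))
        elif not op.isspace():
            tokens.append(op)
    return tokens

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def advance(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def expr(self):              # expr := term ('+' term)*
        value = self.term()
        while self.peek() == '+':
            self.advance()
            value += self.term()
        return value

    def term(self):              # term := atom ('*' atom)*
        value = self.atom()
        while self.peek() == '*':
            self.advance()
            value *= self.atom()
        return value

    def atom(self):              # atom := NUMBER | '(' expr ')'
        tok = self.advance()
        if tok == '(':
            value = self.expr()
            assert self.advance() == ')', "unbalanced parenthesis"
            return value
        return tok

def evaluate(text):
    return Parser(tokenize(text)).expr()

print(evaluate("2 + 3 * 4"))    # 14
```

Each nonterminal becomes one method, and operator precedence falls out
of which method calls which; that directness is typically why
hand-rolled parsers beat table-driven ones on speed.
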
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

The price of liberty is, always has been, and always will be blood.  The person
who is not willing to die for his liberty has already lost it to the first
scoundrel who is willing to risk dying to violate that person's liberty.  Are
you free? 
	-- Andrew Ford


From moshez@zadka.site.co.il  Tue Mar 13 06:20:47 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Tue, 13 Mar 2001 08:20:47 +0200
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
Message-ID: <E14ciAp-0005dJ-00@darjeeling>

After discussions in IPC9 one of the decisions was to set up a mailing
list for discussion of the numeric model of Python.

Subscribe here:

    http://lists.sourceforge.net/lists/listinfo/python-numerics

Or here:

    python-numerics-request@lists.sourceforge.net

I will post my PEPs there as soon as an initial checkin is completed.
Please direct all further numeric model discussion there.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From paul@pfdubois.com  Tue Mar 13 16:38:35 2001
From: paul@pfdubois.com (Paul F. Dubois)
Date: Tue, 13 Mar 2001 08:38:35 -0800
Subject: [Python-Dev] Kinds
Message-ID: <ADEOIFHFONCLEEPKCACCIEKKCGAA.paul@pfdubois.com>

I was asked to write down what I said at the dev day session about kinds. I
have put this in the form of a proposal-like writeup which is attached. I
hope this helps you understand what I meant.


[Attachment: kinds.txt]

A proposal on kinds for Python

This proposal aims to give the user optional control over the precision
and range of numeric computations, so that a computation can be written
once and run anywhere with at least the desired precision and range.

1. Each Python compiler may define as many "kinds" of integer and
floating point numbers as it likes, except that it must propose at
least one kind of integer corresponding to the existing int, and must
propose at least one kind of floating point number, equivalent to the
present float. These two required kinds are called the default integer
kind and the default float kind. The range and precision of the
default kinds are processor dependent, as at present. Each kind other
than the default is given a processor-dependent small integer label
called the kind number.

2. The builtin functions int and float are given an optional second
argument with default value None.
   int(x, kind=None)
   float(x, kind=None)
If kind is None, these attempt to convert x to the default kind of
that type. If the kind is a small integer, the processor uses a
conversion routine to convert x to that kind, if that kind number is
defined on this processor. Otherwise, an exception is raised. Floating
point numbers are truncated to less precision if necessary, but if
they do not fit in the target's dynamic range an exception is raised.

3. Two new builtin functions are defined. They return kind numbers.
   selected_int_kind(n)
      -- return the number of a kind that will hold an integer in the
      range -10**n to 10**n.
   selected_float_kind(nd, n)
      -- return the number of a kind that will hold a floating-point
      number with at least nd digits of precision and a dynamic range
      of at least 10**+/-n.
If no kind with the desired qualities exists, an exception is raised.

4. Modification to the literal parser, a la Fortran 90.
   An integer or floating point literal may be followed by _name,
   where name is a legal identifier; for example, 1.23e10_precise or
   222_big. This is syntactic sugar for float(x, name) or int(x, name)
   respectively.

   Example:

   single = selected_float_kind(6, 90)
   double = selected_float_kind(15, 300)
   x = 1.e100_double
   y = 1.e20_single
   z = 1.2
   w = x * float(z, double)
   u = float(x, single) * y

Open questions: specify exactly what exception is raised in each case,
and whether or not there is a standard kind number or name for the
existing long.
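
The selection functions can be sketched in Python for a hypothetical
processor with two float kinds (every number and name below is
illustrative, not part of the proposal):

```python
# Hypothetical kind table: kind number -> (decimal digits, exponent range),
# roughly modeled on IEEE single and double precision.
FLOAT_KINDS = {1: (6, 37), 2: (15, 307)}

def selected_float_kind(nd, n):
    """Return the smallest kind number offering nd digits of precision
    and a dynamic range of at least 10**+/-n."""
    for kind in sorted(FLOAT_KINDS):
        digits, exp_range = FLOAT_KINDS[kind]
        if digits >= nd and exp_range >= n:
            return kind
    raise OverflowError("no kind with %d digits and range 10**%d" % (nd, n))

single = selected_float_kind(6, 90)     # -> 2 here: 6 digits, but range 90 > 37
double = selected_float_kind(15, 300)   # -> 2
```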



From guido@digicool.com  Tue Mar 13 16:43:42 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 11:43:42 -0500
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: Your message of "Tue, 06 Mar 2001 07:51:49 CST."
 <15012.60277.150431.237935@beluga.mojam.com>
References: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
 <15012.60277.150431.237935@beluga.mojam.com>
Message-ID: <200103131643.LAA01072@cj20424-a.reston1.va.home.com>

> Two things come to mind.  One, perhaps a more careful coding of urllib to
> avoid exposing names it shouldn't export would be a better choice.  Two,
> perhaps those symbols that are not documented but that would be useful when
> extending urllib functionality should be documented and added to __all__.
> 
> Here are the non-module names I didn't include in urllib.__all__:

Let me annotate these in-line:

>     MAXFTPCACHE			No
>     localhost				Yes
>     thishost				Yes
>     ftperrors				Yes
>     noheaders				No
>     ftpwrapper			No
>     addbase				No
>     addclosehook			No
>     addinfo				No
>     addinfourl			No
>     basejoin				Yes
>     toBytes				No
>     unwrap				Yes
>     splittype				Yes
>     splithost				Yes
>     splituser				Yes
>     splitpasswd			Yes
>     splitport				Yes
>     splitnport			Yes
>     splitquery			Yes
>     splittag				Yes
>     splitattr				Yes
>     splitvalue			Yes
>     splitgophertype			Yes
>     always_safe			No
>     getproxies_environment		No
>     getproxies			Yes
>     getproxies_registry		No
>     test1				No
>     reporthook			No
>     test				No
>     main				No
> 
> None are documented, so there are no guarantees if you use them (I have
> subclassed addinfourl in the past myself).

Note that there's a comment block "documenting" all the split*()
functions, indicating that I intended them to be public.  For the
rest, I'm making a best guess based on how useful these things are and
how closely tied to the implementation etc.
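
The split*() helpers each take a URL string apart at one syntactic
boundary; these simplified reimplementations (for illustration only,
not the actual urllib code) show the contract:

```python
import re

def splittype(url):
    # 'http://www.python.org/doc' -> ('http', '//www.python.org/doc')
    match = re.match(r'([^/:]+):(.*)', url)
    if match:
        return match.group(1), match.group(2)
    return None, url

def splithost(rest):
    # '//www.python.org/doc' -> ('www.python.org', '/doc')
    match = re.match(r'//([^/?]*)(.*)', rest)
    if match:
        return match.group(1), match.group(2)
    return None, rest

scheme, rest = splittype('http://www.python.org/doc')
host, path = splithost(rest)
print(scheme, host, path)    # http www.python.org /doc
```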

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy@alum.mit.edu  Tue Mar 13 02:42:20 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Mon, 12 Mar 2001 21:42:20 -0500 (EST)
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
In-Reply-To: <3AAE38C3.2C9BAA08@tismer.com>
References: <200103131447.HAA32016@localhost.localdomain>
 <3AAE38C3.2C9BAA08@tismer.com>
Message-ID: <15021.35084.46284.376573@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "CT" == Christian Tismer <tismer@tismer.com> writes:

  CT> Maybe I'm repeating myself, but I'd like to clarify: I do not
  CT> plan to introduce anything that forces anybody to change her
  CT> code. This is all about extending the current capabilities.

The problem with this position is that C code that uses the old APIs
interferes in odd ways with features that depend on stackless,
e.g. the __xxx__ methods.[*]  If the old APIs work but are not
compatible, we'll end up having to rewrite all our extensions so that
they play nicely with stackless.

If we change the core and standard extensions to use stackless
interfaces, then this style will become the standard style.  If the
interface is simple, this is no problem.  If the interface is complex,
it may be a problem.  My point is that if we change the core APIs, we
place a new burden on extension writers.

Jeremy

    [*] If we fix the type-class dichotomy, will it have any effect on
    the stackful nature of some of these C calls?


From jeremy@alum.mit.edu  Tue Mar 13 02:47:41 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Mon, 12 Mar 2001 21:47:41 -0500 (EST)
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
In-Reply-To: <E14ciAp-0005dJ-00@darjeeling>
References: <E14ciAp-0005dJ-00@darjeeling>
Message-ID: <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>

We've spun off a lot of new lists recently.  I don't particularly care
for this approach, because I sometimes feel like I spend more time
subscribing to new lists than I do actually reading them <0.8 wink>.

I assume that most people are relieved to have the traffic taken off
python-dev.  (I can't think of any other reason to create a separate
list.)  But what's the well-informed Python hacker to do?  Subscribe
to dozens of different lists to discuss each different feature?

A possible solution: python-dev-all@python.org.  This list would be
subscribed to each of the special topic mailing lists.  People could
subscribe to it to get all of the mail without having to individually
subscribe to all the sublists.  Would this work?

Jeremy


From barry@digicool.com  Tue Mar 13 17:12:19 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Tue, 13 Mar 2001 12:12:19 -0500
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
References: <E14ciAp-0005dJ-00@darjeeling>
 <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <15022.21747.94249.599599@anthem.wooz.org>

There was some discussion at IPC9 about implementing `topics' in
Mailman which I think would solve this problem nicely.  I don't have
time to go into much detail now, and it's definitely a medium-term
solution (since other work is taking priority right now).

-Barry


From aycock@csc.UVic.CA  Tue Mar 13 16:54:48 2001
From: aycock@csc.UVic.CA (John Aycock)
Date: Tue, 13 Mar 2001 08:54:48 -0800
Subject: [Python-Dev] Re: CML2 compiler slowness
Message-ID: <200103131654.IAA22731@valdes.csc.UVic.CA>

| From paulp@ActiveState.com Mon Mar 12 18:39:28 2001
| Is there a tutorial about how to make fast Spark grammars or should I go
| back and re-read my compiler construction books?

My advice would be to avoid heavy use of obviously ambiguous
constructions, like defining expressions to be
	E ::= E op E

Aside from that, the whole point of SPARK is to have the language you're
implementing up and running, fast -- even if you don't have a lot of
background in compiler theory.  It's not intended to spit out blazingly
fast production compilers.  If the result isn't fast enough for your
purposes, then you can replace SPARK components with faster ones; you're
not locked in to using the whole package.  Or, if you're patient, you can
wait for the tool to improve :-)
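To put a number on why that ambiguous rule hurts: a chain of n binary
operators parsed under E ::= E op E has Catalan-many distinct parse trees,
which an ambiguity-tolerant parser like SPARK has to contend with (a side
calculation for illustration, not SPARK code):

```python
from math import comb

def parse_tree_count(n_ops):
    # Catalan number C(n): the number of distinct parse trees for n
    # chained binary operators under the ambiguous rule E ::= E op E.
    return comb(2 * n_ops, n_ops) // (n_ops + 1)

[parse_tree_count(n) for n in range(1, 7)]  # [1, 2, 5, 14, 42, 132]
```

The count grows exponentially, so factoring the grammar into precedence
levels (E ::= E + T, T ::= T * F, ...) pays off quickly.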

John


From gmcm@hypernet.com  Tue Mar 13 17:17:39 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 12:17:39 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <3AAE0FE3.2206.7AB85588@localhost>

[Jeremy]
> Here are some comments on Gordon's new draft of PEP 219 and the
> stackless dev day discussion at Spam 9.
> 
> I left the dev day discussion with the following takehome
> message: There is a tension between Stackless Python on the one
> hand and making Python easy to embed in and extend with C
> programs on the other hand. The PEP describes this as the major
> difficulty with C Python.  I won't repeat the discussion of the
> problem there.

Almost all of the discussion about interpreter recursions is 
about completeness, *not* about usability. If you were to 
examine all the Stackless-using apps out there, I think you 
would find that they rely on a stackless version of only one 
builtin - apply().

I can think of 2 practical situations in which it would be *nice* 
to be rid of the recursion:

 - magic methods (__init__, __getitem__ and __getattr__ in 
particular). But magic methods are a convenience. There's 
absolutely nothing there that can't be done another way.

 - a GUI. Again, no big deal, because GUIs impose all kinds of 
restrictions to begin with. If you use a GUI with threads, you 
almost always have to dedicate one thread (usually the main 
one) to the GUI and be careful that the other threads don't 
touch the GUI directly. It's basically the same issue with 
Stackless.
 
As for the rest of the possible situations, demand is 
nonexistant. In an ideal world, we'd never have to answer the 
question "how come it didn't work?". But put on you 
application programmers hat for a moment and see if you can 
think of a legitimate reason for, eg, one of the objects in an 
__add__ wanting to make use of a pre-existing coroutine 
inside the __add__ call. [Yeah, Tm can come up with a 
reason, but I did say "legitimate".]

> I would like to seem a somewhat more detailed discussion of this
> in the PEP.  I think it's an important issue to work out before
> making a decision about a stack-light patch.

I'm not sure why you say that. The one comparable situation 
in normal Python is crossing threads in callbacks. With the 
exception of a couple of complete madmen (doing COM 
support), everyone else learns to avoid the situation. [Mark 
doesn't even claim to know *how* he solved the problem 
<wink>].
 
> The problem of nested interpreters and the C API seems to come up
> in several ways.  These are all touched on in the PEP, but not in
> much detail.  This message is mostly a request for more detail
> :-).
> 
>   - Stackless disallows transfer out of a nested interpreter.  (It
>     has to; anything else would be insane.)  Therefore, the
>     specification for microthreads &c. will be complicated by a
>     listing of the places where control transfers are not
>     possible. The PEP says this is not ideal, but not crippling. 
>     I'd like to see an actual spec for where it's not allowed in
>     pure Python.  It may not be crippling, but it may be a
>     tremendous nuisance in practice; e.g. remember that __init__
>     calls create a critical section.

The one instance I can find on the Stackless list (of 
attempting to use a continuation across interpreter 
invocations) was a call to uthread.wait() in __init__. Arguably 
a (minor) nuisance, arguably bad coding practice (even if it 
worked).

I encountered it when trying to make a generator work with a 
for loop. So you end up using a while loop <shrug>.

It's disallowed wherever it's not accommodated. Listing those 
cases is probably not terribly helpful; I bet even Guido is 
sometimes surprised at what actually happens under the 
covers. The message "attempt to run a locked frame" is not 
very meaningful to the Stackless newbie, however.
 
[Christian answered the others...]


- Gordon


From DavidA@ActiveState.com  Tue Mar 13 17:25:49 2001
From: DavidA@ActiveState.com (David Ascher)
Date: Tue, 13 Mar 2001 09:25:49 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131142.GAA30567@cj20424-a.reston1.va.home.com>
Message-ID: <PLEJJNOHDIGGLDPOGPJJEEPNCNAA.DavidA@ActiveState.com>

GvR:

> [Paul]
> > David Ascher suggested during the talk that comparisons of floats could
> > raise a warning unless you turned that warning off (which only
> > knowledgable people would do). I think that would go a long way to
> > helping them find and deal with serious floating point inaccuracies in
> > their code.
>
> You mean only for == and !=, right?

Right.

> We should wait until 2.2 though --
> we haven't clearly decided that this is the way we want to go.

Sure.  It was just a suggestion for a way to address the inherent problems
in having newbies work w/ FP (where newbie in this case is 99.9% of the
programming population, IMO).

-david



From thomas@xs4all.net  Tue Mar 13 18:08:05 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 19:08:05 +0100
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
In-Reply-To: <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Mon, Mar 12, 2001 at 09:47:41PM -0500
References: <E14ciAp-0005dJ-00@darjeeling> <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <20010313190805.C404@xs4all.nl>

On Mon, Mar 12, 2001 at 09:47:41PM -0500, Jeremy Hylton wrote:

> We've spun off a lot of new lists recently.  I don't particularly care
> for this approach, because I sometimes feel like I spend more time
> subscribing to new lists than I do actually reading them <0.8 wink>.

And even if they are separate lists, people keep crossposting, completely
negating the idea behind separate lists. ;P I think the main reason for
separate lists is to allow non-python-dev-ers easy access to the lists. 

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gmcm@hypernet.com  Tue Mar 13 18:29:56 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 13:29:56 -0500
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
In-Reply-To: <15021.35084.46284.376573@w221.z064000254.bwi-md.dsl.cnc.net>
References: <3AAE38C3.2C9BAA08@tismer.com>
Message-ID: <3AAE20D4.25660.7AFA8206@localhost>

> >>>>> "CT" == Christian Tismer <tismer@tismer.com> writes:
> 
>   CT> Maybe I'm repeating myself, but I'd like to clarify: I do not
>   CT> plan to introduce anything that forces anybody to change her
>   CT> code. This is all about extending the current capabilities.

[Jeremy] 
> The problem with this position is that C code that uses the old
> APIs interferes in odd ways with features that depend on
> stackless, e.g. the __xxx__ methods.[*]  If the old APIs work but
> are not compatible, we'll end up having to rewrite all our
> extensions so that they play nicely with stackless.

I don't understand. Python code calls C extension. C 
extension calls Python callback which tries to use a pre-
existing coroutine. How is the "interference"? The callback 
exists only because the C extension has an API that uses 
callbacks. 

Well, OK, the callback doesn't have to be explicit. The C can 
go fumbling around in a passed in object and find something 
callable. But to call it "interference", I think you'd have to have 
a working program which stopped working when a C extension 
crept into it without the programmer noticing <wink>.

> If we change the core and standard extensions to use stackless
> interfaces, then this style will become the standard style.  If
> the interface is simple, this is no problem.  If the interface is
> complex, it may be a problem.  My point is that if we change the
> core APIs, we place a new burden on extension writers.

This is all *way* out of scope, but if you go the route of 
creating a pseudo-frame for the C code, it seems quite 
possible that the interface wouldn't have to change at all. We 
don't need any more args into PyEval_EvalCode. We don't 
need any more results out of it. Christian's stackless map 
implementation is proof-of-concept that you can do this stuff.

The issue (if and when we get around to "truly and completely 
stackless") is complexity for the Python internals 
programmer, not your typical object-wrapping / SWIG-swilling 
extension writer.


> Jeremy
> 
>     [*] If we fix the type-class dichotomy, will it have any
>     effect on the stackful nature of some of these C calls?

Don't know. What will those calls look like <wink>?

- Gordon


From jeremy@alum.mit.edu  Tue Mar 13 18:30:37 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Tue, 13 Mar 2001 13:30:37 -0500 (EST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <20010313185501.A7459@planck.physik.uni-konstanz.de>
References: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
 <3AAE0FE3.2206.7AB85588@localhost>
 <20010313185501.A7459@planck.physik.uni-konstanz.de>
Message-ID: <15022.26445.896017.406266@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "BR" == Bernd Rinn <Bernd.Rinn@epost.de> writes:

  BR> On Tue, Mar 13, 2001 at 12:17:39PM -0500, Gordon McMillan wrote:
  >> The one instance I can find on the Stackless list (of attempting
  >> to use a continuation across interpreter invocations) was a call
  >> the uthread.wait() in __init__. Arguably a (minor) nuisance,
  >> arguably bad coding practice (even if it worked).

[explanation of code practice that led to error omitted]

  BR> So I suspect that you might end up with a rule of thumb:

  BR> """ Don't use classes and libraries that use classes when doing
  BR> IO in microthreaded programs!  """

  BR> which might indeed be a problem. Am I overlooking something
  BR> fundamental here?

Thanks for asking this question in a clear and direct way.

A few other variations on the question come to mind:

    If a programmer uses a library implemented via coroutines, can she
    call library methods from an __xxx__ method?

    Can coroutines or microthreads co-exist with callbacks invoked by
    C extensions? 

    Can a program do any microthread IO in an __call__ method?

If any of these are the sort of "in theory" problems that the PEP
alludes to, then we need a full spec for what is and is not allowed.
It doesn't make sense to tell programmers to follow unspecified
"reasonable" programming practices.

Jeremy


From ping@lfw.org  Tue Mar 13 18:44:37 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Tue, 13 Mar 2001 10:44:37 -0800 (PST)
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <20010313125418.A404@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10103131039260.13108-100000@skuld.kingmanhall.org>

On Tue, Mar 13, 2001 at 05:18:35AM -0500, Guido van Rossum wrote:
> I think the extent to which HWFP doesn't work for newbies is mostly
> related to the change we made in 2.0 where repr() (and hence the
> interactive prompt) show full precision, leading to annoyances like
> repr(1.1) == '1.1000000000000001'.

I'll argue now -- just as i argued back then, but louder! -- that
this isn't necessary.  repr(1.1) can be 1.1 without losing any precision.

Simply stated, you only need to display as many decimal places as are
necessary to regenerate the number.  So if x happens to be the
floating-point number closest to 1.1, then 1.1 is all you have to show.

By definition, if you type x = 1.1, x will get the floating-point
number closest in value to 1.1.  So x will print as 1.1.  And entering
1.1 will be sufficient to reproduce x exactly.
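The rule Ping describes can be sketched as follows (a hypothetical helper,
not the actual repr() implementation; CPython did eventually adopt this
shortest-round-trip behaviour for float repr, in 2.7/3.1):

```python
def shortest_repr(x):
    # Emit the fewest significant digits that still round-trip to x exactly.
    for precision in range(1, 18):
        s = '%.*g' % (precision, x)
        if float(s) == x:
            return s
    return '%.17g' % x  # 17 digits always suffice for an IEEE double

shortest_repr(1.1)             # '1.1'
shortest_repr(1.000000000001)  # '1.000000000001'
```

Because parsing and printing both go through correctly rounded conversion,
`float(shortest_repr(x)) == x` holds for every finite x, yet the common
cases print the way people typed them.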

Thomas Wouters wrote:
> I suspect that the change in float.__repr__() did reduce the number of
> suprises over something like this, though: (taken from a 1.5.2 interpreter)
> 
> >>> x = 1.000000000001
> >>> x
> 1.0
> >>> x == 1.0
> 0

Stick in a

    warning: floating-point numbers should not be tested for equality

and that should help at least somewhat.

If you follow the rule i stated above, you would get this:

    >>> x = 1.1
    >>> x
    1.1
    >>> x == 1.1
    warning: floating-point numbers should not be tested for equality
    1
    >>> x = 1.000000000001
    >>> x
    1.0000000000010001
    >>> x == 1.000000000001
    warning: floating-point numbers should not be tested for equality
    1
    >>> x == 1.0
    warning: floating-point numbers should not be tested for equality
    0

All of this seems quite reasonable to me.
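A sketch of how the proposed warning could behave, using an explicit
comparison helper purely for illustration (the name and shape here are made
up; the real proposal would hook == itself and allow the warning to be
filtered off):

```python
import warnings

def float_eq(a, b):
    # Illustrative only: warn on exact-equality tests involving floats,
    # then perform the comparison anyway.
    if isinstance(a, float) or isinstance(b, float):
        warnings.warn("floating-point numbers should not be tested for equality")
    return a == b
```

Knowledgeable users would silence it with the standard warnings filter,
e.g. `warnings.simplefilter("ignore")`, matching the suggestion that only
people who understand FP turn the warning off.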



-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso



From skip@mojam.com (Skip Montanaro)  Tue Mar 13 19:48:15 2001
From: skip@mojam.com (Skip Montanaro) (Skip Montanaro)
Date: Tue, 13 Mar 2001 13:48:15 -0600 (CST)
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: <200103131643.LAA01072@cj20424-a.reston1.va.home.com>
References: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
 <15012.60277.150431.237935@beluga.mojam.com>
 <200103131643.LAA01072@cj20424-a.reston1.va.home.com>
Message-ID: <15022.31103.7828.938707@beluga.mojam.com>

    Guido> Let me annotate these in-line:

    ...

I just added all the names marked "yes".
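The effect of the change is that star-imports now expose only the names
marked "yes". A small self-contained demonstration (the fake module and its
names are invented for illustration):

```python
import sys
import types

# Build a throwaway module with one public name listed in __all__ and one
# name deliberately left off the list.
mod = types.ModuleType("fake_urllib")
exec(
    "__all__ = ['splittype']\n"
    "def splittype(url): return tuple(url.split(':', 1))\n"
    "def ftpwrapper(): pass\n",
    mod.__dict__,
)
sys.modules["fake_urllib"] = mod

ns = {}
exec("from fake_urllib import *", ns)
print(sorted(n for n in ns if not n.startswith('__')))  # ['splittype']
```

Note that `import fake_urllib; fake_urllib.ftpwrapper` still works: __all__
only governs the star-import, not direct attribute access.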

Skip


From gmcm@hypernet.com  Tue Mar 13 20:02:14 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 15:02:14 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.26445.896017.406266@w221.z064000254.bwi-md.dsl.cnc.net>
References: <20010313185501.A7459@planck.physik.uni-konstanz.de>
Message-ID: <3AAE3676.13712.7B4F001D@localhost>

Can we please get the followups under control? Bernd sent 
me a private email. I replied privately. Then he forwarded to 
Stackless. So I forwarded my reply to Stackless. Now Jeremy 
adds python-dev to the mix.

> >>>>> "BR" == Bernd Rinn <Bernd.Rinn@epost.de> writes:
> 
>   BR> On Tue, Mar 13, 2001 at 12:17:39PM -0500, Gordon McMillan wrote:
>   >> The one instance I can find on the Stackless list (of attempting
>   >> to use a continuation across interpreter invocations) was a call
>   >> to uthread.wait() in __init__. Arguably a (minor) nuisance,
>   >> arguably bad coding practice (even if it worked).
> 
> [explanation of code practice that led to error omitted]
> 
>   BR> So I suspect that you might end up with a rule of thumb:
> 
>   BR> """ Don't use classes and libraries that use classes when doing
>   BR> IO in microthreaded programs!  """
> 
>   BR> which might indeed be a problem. Am I overlooking something
>   BR> fundamental here?

Synopsis of my reply: this is more a problem with uthreads 
than coroutines. In any (real) thread, you're limited to dealing 
with one non-blocking IO technique (eg, select) without going 
into a busy loop. If you're dedicating a (real) thread to select, it 
makes more sense to use coroutines than uthreads.

> A few other variations on the question come to mind:
> 
>     If a programmer uses a library implemented via coroutines, can
>     she call library methods from an __xxx__ method?

Certain situations won't work, but you knew that.
 
>     Can coroutines or microthreads co-exist with callbacks
>     invoked by C extensions? 

Again, in certain situations it won't work. Again, you knew that.
 
>     Can a program do any microthread IO in an __call__ method?

Considering you know the answer to that one too, you could've 
phrased it as a parsable question.
 
> If any of these are the sort of "in theory" problems that the PEP
> alludes to, then we need a full spec for what is and is not
> allowed.  It doesn't make sense to tell programmers to follow
> unspecified "reasonable" programming practices.

That's easy. In a nested invocation of the Python interpreter, 
you can't use a coroutine created in an outer interpreter. 

In the Python 2 documentation, there are 6 caveats listed in 
the thread module. That's a couple of orders of magnitude 
different from the actual number of ways you can screw up 
using the thread module.

- Gordon


From jeremy@alum.mit.edu  Tue Mar 13 20:22:36 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Tue, 13 Mar 2001 15:22:36 -0500 (EST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE3676.13712.7B4F001D@localhost>
References: <20010313185501.A7459@planck.physik.uni-konstanz.de>
 <3AAE3676.13712.7B4F001D@localhost>
Message-ID: <15022.33164.673632.351851@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GMcM" == Gordon McMillan <gmcm@hypernet.com> writes:

  GMcM> Can we please get the followups under control? Bernd sent me a
  GMcM> private email. I replied privately. Then he forwarded to
  GMcM> Stackless. So I forwarded my reply to Stackless. Now Jeremy
  GMcM> adds python-dev to the mix.

I had no idea what was going on with forwards and the like.  It looks
like someone "bounced" messages, i.e. sent a message to me or a list
I'm on without including me or the list in the to or cc fields.  So I
couldn't tell how I received the message!  So I restored the original
recipients list of the thread (you, stackless, python-dev).

  >> >>>>> "BR" == Bernd Rinn <Bernd.Rinn@epost.de> writes:
  >> A few other variations on the question come to mind:
  >>
  >> If a programmer uses a library implemented via coroutines, can she
  >> call library methods from an __xxx__ method?

  GMcM> Certain situations won't work, but you knew that.

I expected that some won't work, but no one seems willing to tell me
exactly which ones will and which ones won't.  Should the caveat in
the documentation say "avoid using certain __xxx__ methods" <0.9
wink>. 
 
  >> Can coroutines or microthreads co-exist with callbacks invoked by
  >> C extensions?

  GMcM> Again, in certain situations it won't work. Again, you knew
  GMcM> that.

Wasn't sure.
 
  >> Can a program do any microthread IO in an __call__ method?

  GMcM> Considering you know the answer to that one too, you could've
  GMcM> phrased it as a parsable question.

Do I know the answer?  I assume the answer is no, but I don't feel
very certain.
 
  >> If any of these are the sort of "in theory" problems that the PEP
  >> alludes to, then we need a full spec for what is and is not
  >> allowed.  It doesn't make sense to tell programmers to follow
  >> unspecified "reasonable" programming practices.

  GMcM> That's easy. In a nested invocation of the Python interpreter,
  GMcM> you can't use a coroutine created in an outer interpreter.

Can we define these situations in a way that doesn't appeal to the
interpreter implementation?  If not, can we at least come up with a
list of what will and will not work at the python level?

  GMcM> In the Python 2 documentation, there are 6 caveats listed in
  GMcM> the thread module. That's a couple of orders of magnitude
  GMcM> different from the actual number of ways you can screw up
  GMcM> using the thread module.

The caveats for the thread module seem like pretty minor stuff to me.
If you are writing a threaded application, don't expect code to
continue running after the main thread has exited.

The caveats for microthreads seem to cover a vast swath of territory:
The use of libraries or extension modules that involve callbacks or
instances with __xxx__ methods may lead to application failure.  I
worry about it because it doesn't sound very modular.  The use of
coroutines in one library means I can't use that library in certain
special cases in my own code.

I'm sorry if I sound grumpy, but I feel like I can't get a straight
answer despite several attempts.  At some level, it's fine to say that
there are some corner cases that won't work well with microthreads or
coroutines implemented on top of stackless python.  But I think the
PEP should discuss the details.  I've never written an application
that uses stackless-based microthreads or coroutines so I don't feel
confident in my judgement of the situation.

Which gets back to Bernd's original question:

  GMcM> >   BR> """ Don't use classes and libraries that use classes when doing
  GMcM> >   BR> IO in microthreaded programs!  """
  GMcM> > 
  GMcM> >   BR> which might indeed be a problem. Am I overlooking something
  GMcM> >   BR> fundamental here?

and the synopsis of your answer:

  GMcM> Synopsis of my reply: this is more a problem with uthreads 
  GMcM> than coroutines. In any (real) thread, you're limited to dealing 
  GMcM> with one non-blocking IO technique (eg, select) without going 
  GMcM> into a busy loop. If you're dedicating a (real) thread to select, it 
  GMcM> makes more sense to use coroutines than uthreads.

I don't understand how this addresses the question, but perhaps I
haven't seen your reply yet.  Mail gets through to python-dev and
stackless at different rates.

Jeremy


From bckfnn@worldonline.dk  Tue Mar 13 20:34:17 2001
From: bckfnn@worldonline.dk (Finn Bock)
Date: Tue, 13 Mar 2001 20:34:17 GMT
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <15021.24645.357064.856281@anthem.wooz.org>
References: <20010312220425.T404@xs4all.nl> <200103122332.SAA22948@cj20424-a.reston1.va.home.com> <15021.24645.357064.856281@anthem.wooz.org>
Message-ID: <3aae83f7.41314216@smtp.worldonline.dk>

>    GvR> Yes, that was on the list once but got dropped.  You might
>    GvR> want to get together with Finn and Samuele to see what their
>    GvR> rules are.  (They allow the use of some keywords at least as
>    GvR> keyword=expression arguments and as object.attribute names.)

[Barry]

>I'm actually a little surprised that the "Jython vs. CPython"
>differences page doesn't describe this (or am I missing it?):

It is mentioned at the bottom of 

     http://www.jython.org/docs/usejava.html

>    http://www.jython.org/docs/differences.html
>
>I thought it used to.

I have now also added it to the difference page.

>IIRC, keywords were allowed if there was no question of it introducing
>a statement.  So yes, keywords were allowed after the dot in attribute
>lookups, and as keywords in argument lists, but not as variable names
>on the lhs of an assignment (I don't remember if they were legal on
>the rhs, but it seems like that ought to be okay, and is actually
necessary if you allow them in argument lists).

- after "def"
- after a dot "." in trailer
- after "import"
- after "from" (in an import stmt)
- and as keyword argument names in arglist

>It would eliminate much of the need for writing obfuscated code like
>"class_" or "klass".

Not under the rules as Jython currently has them. Jython only allows the
*use* of external code that contains reserved words as class, method or
attribute names, including overriding such methods.

The distinction between the Name and AnyName grammar productions has
worked very well for us, but I don't think of it as a general "keywords
can be used as identifiers" feature.

regards,
finn


From barry@digicool.com  Tue Mar 13 20:44:04 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Tue, 13 Mar 2001 15:44:04 -0500
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl>
 <200103122332.SAA22948@cj20424-a.reston1.va.home.com>
 <15021.24645.357064.856281@anthem.wooz.org>
 <3aae83f7.41314216@smtp.worldonline.dk>
Message-ID: <15022.34452.183052.362184@anthem.wooz.org>

>>>>> "FB" == Finn Bock <bckfnn@worldonline.dk> writes:

    | - and as keyword argument names in arglist

I think this last one doesn't work:

-------------------- snip snip --------------------
Jython 2.0 on java1.3.0 (JIT: jitc)
Type "copyright", "credits" or "license" for more information.
>>> def foo(class=None): pass
Traceback (innermost last):
  (no code object) at line 0
  File "<console>", line 1
	def foo(class=None): pass
	        ^
SyntaxError: invalid syntax
>>> def foo(print=None): pass
Traceback (innermost last):
  (no code object) at line 0
  File "<console>", line 1
	def foo(print=None): pass
	        ^
SyntaxError: invalid syntax
-------------------- snip snip --------------------

-Barry


From akuchlin@mems-exchange.org  Tue Mar 13 21:33:31 2001
From: akuchlin@mems-exchange.org (Andrew Kuchling)
Date: Tue, 13 Mar 2001 16:33:31 -0500
Subject: [Python-Dev] Removing doc/howto on python.org
Message-ID: <E14cwQ7-0003q3-00@ute.cnri.reston.va.us>

Looking at a bug report Fred forwarded, I realized that after
py-howto.sourceforge.net was set up, www.python.org/doc/howto was
never changed to redirect to the SF site instead.  As of this
afternoon, that's now done; links on www.python.org have been updated,
and I've added the redirect.

Question: is it worth blowing away the doc/howto/ tree now, or should
it just be left there, inaccessible, until work on www.python.org
resumes?

--amk


From tismer@tismer.com  Tue Mar 13 22:44:22 2001
From: tismer@tismer.com (Christian Tismer)
Date: Tue, 13 Mar 2001 23:44:22 +0100
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
References: <200103131447.HAA32016@localhost.localdomain>
 <3AAE38C3.2C9BAA08@tismer.com> <15021.35084.46284.376573@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <3AAEA2C6.7F1DD2CE@tismer.com>


Jeremy Hylton wrote:
> 
> >>>>> "CT" == Christian Tismer <tismer@tismer.com> writes:
> 
>   CT> Maybe I'm repeating myself, but I'd like to clarify: I do not
>   CT> plan to introduce anything that forces anybody to change her
>   CT> code. This is all about extending the current capabilities.
> 
> The problem with this position is that C code that uses the old APIs
> interferes in odd ways with features that depend on stackless,
> e.g. the __xxx__ methods.[*]  If the old APIs work but are not
> compatible, we'll end up having to rewrite all our extensions so that
> they play nicely with stackless.

My idea was to keep all interfaces as they are, add a stackless flag,
and add stackless versions of all those calls. These are used when
they exist. If not, the old, recursive calls are used. If we can
find such a flag, we're fine. If not, we're hosed.
There is no point in forcing everybody to play nicely with Stackless.

> If we change the core and standard extensions to use stackless
> interfaces, then this style will become the standard style.  If the
> interface is simple, this is no problem.  If the interface is complex,
> it may be a problem.  My point is that if we change the core APIs, we
> place a new burden on extension writers.

My point is that if we extend the core APIs, we do not place
a burden on extension writers, given that we can do the extension
in a transparent way.

> Jeremy
> 
>     [*] If we fix the type-class dichotomy, will it have any effect on
>     the stackful nature of some of these C calls?

I truly cannot answer this one.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From gmcm@hypernet.com  Tue Mar 13 22:16:24 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 17:16:24 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.33164.673632.351851@w221.z064000254.bwi-md.dsl.cnc.net>
References: <3AAE3676.13712.7B4F001D@localhost>
Message-ID: <3AAE55E8.4865.7BC9D6B2@localhost>

[Jeremy]
>   >> If a programmer uses a library implemented via coroutines, can
>   >> she call library methods from an __xxx__ method?
> 
>   GMcM> Certain situations won't work, but you knew that.
> 
> I expected that some won't work, but no one seems willing to tell
> me exactly which ones will and which ones won't.  Should the
> caveat in the documentation say "avoid using certain __xxx__
> methods" <0.9 wink>. 

Within an __xxx__ method, you cannot *use* a coroutine not 
created in that method. That is true in current Stackless and 
will be true in Stack-lite. The presence of "library" in the 
question is a distraction.

I guess if you think of a coroutine as just another kind of 
callable object, this looks like a strong limitation. But you 
don't find yourself thinking of threads as plain old callable 
objects, do you? In a threaded program, no matter how 
carefully designed, there is a lot of thread detritus lying 
around. If you don't stay conscious of the transfers of control 
that may happen, you will screw up.

Despite the limitation on using coroutines in magic methods, 
coroutines have an advantage in that transfers of control only 
happen when you want them to. So avoiding unwanted 
transfers of control is vastly easier.
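That property is easy to see even with plain Python generators (a sketch by analogy only -- this is not the Stackless API, and the names are made up):

```python
# Illustration of voluntary transfer of control using plain generators
# (an analogy only -- this is not the Stackless API).  Control leaves
# producer() at the yield, and nowhere else.
def producer():
    for i in range(3):
        yield i                # the only point where control transfers out

def consume():
    results = []
    for value in producer():   # each iteration is an explicit resume
        results.append(value)
    return results

print(consume())
```

Contrast with preemptive threads, where a switch can land between any two bytecodes.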
 
>   >> Can coroutines or microthreads co-exist with callbacks
>   >> invoked by C extensions?
> 
>   GMcM> Again, in certain situations it won't work. Again, you
>   GMcM> knew that.
> 
> Wasn't sure.

It's exactly the same situation.
 
>   >> Can a program do any microthread IO in an __call__ method?
> 
>   GMcM> Considering you know the answer to that one too, you
>   GMcM> could've phrased it as a parsable question.
> 
> Do I know the answer?  I assume the answer is no, but I don't
> feel very certain.

What is "microthreaded IO"? Probably the attempt to yield 
control if the IO operation would block. Would doing that 
inside __call__ work with microthreads? No. 

It's not my decision whether this particular situation 
needs to be documented. Sometime between the 2nd and 5th 
times the programmer encounters this exception, they'll say 
"Oh phooey, I can't do this in __call__, I need an explicit 
method instead."  Python has never claimed that __xxx__ 
methods are safe as milk. Quite the contrary.
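The "explicit method instead" workaround might look like this (a sketch; Fetcher and fetch are hypothetical names):

```python
# Sketch of the "explicit method instead of __call__" workaround.
# Fetcher/fetch are hypothetical names; the point is that any
# control-transferring work lives in a plain method, not in __xxx__.
class Fetcher:
    def __init__(self, url):
        self.url = url      # __init__ only stores state, no transfers

    def fetch(self):
        # in a microthreaded program, the yielding/blocking IO would
        # go here, where transferring control is allowed
        return "data from %s" % self.url

print(Fetcher("example").fetch())
```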

 
>   >> If any of these are the sort of "in theory" problems that the
>   >> PEP alludes to, then we need a full spec for what is and is
>   >> not allowed.  It doesn't make sense to tell programmers to
>   >> follow unspecified "reasonable" programming practices.
> 
>   GMcM> That's easy. In a nested invocation of the Python
>   GMcM> interpreter, you can't use a coroutine created in an
>   GMcM> outer interpreter.
> 
> Can we define these situations in a way that doesn't appeal to
> the interpreter implementation? 

No, because it's implementation dependent.

> If not, can we at least come up
> with a list of what will and will not work at the python level?

Does Python attempt to catalogue all the ways you can screw 
up using magic methods? Using threads? How 'bout the 
metaclass hook? Even stronger, do we catalogue all the ways 
that an end-user-programmer can get bit by using a library 
written by someone else that makes use of these facilities?
 
>   GMcM> In the Python 2 documentation, there are 6 caveats listed
>   GMcM> in the thread module. That's a couple of orders of magnitude
>   GMcM> different from the actual number of ways you can screw up
>   GMcM> using the thread module.
> 
> The caveats for the thread module seem like pretty minor stuff to
> me. If you are writing a threaded application, don't expect code
> to continue running after the main thread has exited.

Well, the thread caveats don't mention the consequences of 
starting and running a thread within an __init__ method.  

> The caveats for microthreads seems to cover a vast swath of
> territory: The use of libraries or extension modules that involve
> callbacks or instances with __xxx__ methods may lead to
> application failure. 

While your statement is true on the face of it, it is very 
misleading. Things will only fall apart when you code an 
__xxx__ method or callback that uses a pre-existing coroutine 
(or does a uthread swap). You can very easily get in trouble 
right now with threads and callbacks. But the real point is that 
it is *you* the programmer trying to do something that won't 
work (and, BTW, getting notified right away), not some library 
pulling a fast one on you. (Yes, the library could make things 
very hard for you, but that's nothing new.)

Application programmers do not need magic methods. Ever. 
They are very handy for people creating libraries for application 
programmers to use, but we already presume (naively) that 
these people know what they're doing.

> I worry about it because it doesn't sound
> very modular.  The use of coroutines in one library means I can't
> use that library in certain special cases in my own code.

With a little familiarity, you'll find that coroutines are a good 
deal more modular than threads.

In order for that library to violate your expectations, that library 
must be conscious of multiple coroutines (otherwise, it's just a 
plain stackful call / return). It must have kept a coroutine from 
some other call, or had you pass one in. So you (if at all 
clueful <wink>) will be conscious that something is going on 
here.

The issue is the same as if you used a framework which used 
real threads, but never documented anything about the 
threads. You code callbacks that naively and independently 
mutate a global collection. Do you blame Python?

> I'm sorry if I sound grumpy, but I feel like I can't get a
> straight answer despite several attempts.  At some level, it's
> fine to say that there are some corner cases that won't work well
> with microthreads or coroutines implemented on top of stackless
> python.  But I think the PEP should discuss the details.  I've
> never written in an application that uses stackless-based
> microthreads or coroutines so I don't feel confident in my
> judgement of the situation.

And where on the fearful to confident scale was the Jeremy 
just getting introduced to threads?
 
> Which gets back to Bernd's original question:
> 
>   GMcM> >   BR> """ Don't use classes and libraries that use
>   GMcM> >   BR> classes when doing IO in microthreaded programs!  """
>   GMcM> >
>   GMcM> >   BR> which might indeed be a problem. Am I
>   GMcM> >   BR> overlooking something fundamental here?
> 
> and the synopsis of your answer:
> 
>   GMcM> Synopsis of my reply: this is more a problem with
>   GMcM> uthreads than coroutines. In any (real) thread, you're
>   GMcM> limited to dealing with one non-blocking IO technique
>   GMcM> (eg, select) without going into a busy loop. If you're
>   GMcM> dedicating a (real) thread to select, it makes more sense
>   GMcM> to use coroutines than uthreads.
> 
> I don't understand how this addresses the question, but perhaps I
> haven't seen your reply yet.  Mail gets through to python-dev and
> stackless at different rates.

Coroutines only swap voluntarily. It's very obvious where these 
transfers of control take place hence simple to control when 
they take place. My suspicion is that most people use 
uthreads because they use a familiar model. Not many people 
are used to coroutines, but many situations would be more 
profitably approached with coroutines than uthreads.

- Gordon


From fredrik@pythonware.com  Wed Mar 14 00:28:20 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 01:28:20 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org>
Message-ID: <000b01c0ac1d$ad79bec0$e46940d5@hagrid>

barry wrote:
>
>    | - and as keyword argument names in arglist
>
> I think this last one doesn't work:
> 
> -------------------- snip snip --------------------
> Jython 2.0 on java1.3.0 (JIT: jitc)
> Type "copyright", "credits" or "license" for more information.
> >>> def foo(class=None): pass
> Traceback (innermost last):
>   (no code object) at line 0
>   File "<console>", line 1
> def foo(class=None): pass
>         ^
> SyntaxError: invalid syntax
> >>> def foo(print=None): pass
> Traceback (innermost last):
>   (no code object) at line 0
>   File "<console>", line 1
> def foo(print=None): pass
>         ^
> SyntaxError: invalid syntax
> -------------------- snip snip --------------------

>>> def spam(**kw):
...     print kw
...
>>> spam(class=1)
{'class': 1}
>>> spam(print=1)
{'print': 1}
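And for an interpreter whose call grammar rejects keyword names at the call site (as Jython does above), the same effect is always available by unpacking a dict:

```python
# Keyword names that the call-site grammar won't accept can still be
# passed by building the keyword dict explicitly and unpacking it.
def spam(**kw):
    return kw

print(spam(**{'class': 1, 'print': 2}))
```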

Cheers /F



From guido@digicool.com  Wed Mar 14 00:55:54 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 19:55:54 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: Your message of "Tue, 13 Mar 2001 17:16:24 EST."
 <3AAE55E8.4865.7BC9D6B2@localhost>
References: <3AAE3676.13712.7B4F001D@localhost>
 <3AAE55E8.4865.7BC9D6B2@localhost>
Message-ID: <200103140055.TAA02495@cj20424-a.reston1.va.home.com>

I've been following this discussion anxiously.  There's one
application of stackless where I think the restrictions *do* come into
play.  Gordon wrote a nice socket demo where multiple coroutines or
uthreads were scheduled by a single scheduler that did a select() on
all open sockets.  I would think that if you use this a lot, e.g. for
all your socket I/O, you might get in trouble sometimes when you
initiate a socket operation from within e.g. __init__ but find you
have to complete it later.

How realistic is this danger?  How serious is this demo?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From greg@cosc.canterbury.ac.nz  Wed Mar 14 01:28:49 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 14 Mar 2001 14:28:49 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE0FE3.2206.7AB85588@localhost>
Message-ID: <200103140128.OAA05167@s454.cosc.canterbury.ac.nz>

Gordon McMillan <gmcm@hypernet.com>:

> But magic methods are a convenience. There's 
> absolutely nothing there that can't be done another way.

Strictly speaking that's true, but from a practical standpoint
I think you will *have* to address __init__ at least, because
it is so ubiquitous and ingrained in the Python programmer's
psyche. Asking Python programmers to give up using __init__
methods will be greeted with about as much enthusiasm as if
you asked them to give up using all identifiers containing
the letter 'e'. :-)

>  - a GUI. Again, no big deal

Sorry, but I think it *is* a significantly large deal...

> be careful that the other threads don't 
> touch the GUI directly. It's basically the same issue with 
> Stackless.

But the other threads don't have to touch the GUI directly
to be a problem.

Suppose I'm building an IDE and I want a button which spawns
a microthread to execute the user's code. The thread doesn't
make any GUI calls itself, but it's spawned from inside a
callback, which, if I understand correctly, will be impossible.

> The one comparable situation 
> in normal Python is crossing threads in callbacks. With the 
> exception of a couple of complete madmen (doing COM 
> support), everyone else learns to avoid the situation.

But if you can't even *start* a thread using a callback,
how do you do anything with threads at all?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From gmcm@hypernet.com  Wed Mar 14 02:22:44 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 21:22:44 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103140055.TAA02495@cj20424-a.reston1.va.home.com>
References: Your message of "Tue, 13 Mar 2001 17:16:24 EST."             <3AAE55E8.4865.7BC9D6B2@localhost>
Message-ID: <3AAE8FA4.31567.7CAB5C89@localhost>

[Guido]
> I've been following this discussion anxiously.  There's one
> application of stackless where I think the restrictions *do* come
> into play.  Gordon wrote a nice socket demo where multiple
> coroutines or uthreads were scheduled by a single scheduler that
> did a select() on all open sockets.  I would think that if you
> use this a lot, e.g. for all your socket I/O, you might get in
> trouble sometimes when you initiate a socket operation from
> within e.g. __init__ but find you have to complete it later.

Exactly as hard as it is not to run() a thread from within the 
Thread __init__. Most threaders have probably long forgotten 
that they tried that -- once.
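For the record, the threading analogue of the trap reduces to a few lines (a minimal sketch using the threading module; Worker is a made-up name):

```python
import threading

# The threading analogue of the trap: construct in __init__, but
# transfer control (start) only after construction is complete.
class Worker(threading.Thread):
    def __init__(self, data):
        threading.Thread.__init__(self)
        self.data = data
        self.result = None
        # deliberately no self.start() here: run() could otherwise
        # execute before this object is fully initialized

    def run(self):
        self.result = self.data * 2

w = Worker(21)
w.start()      # the transfer of control happens outside __init__
w.join()
print(w.result)
```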

> How realistic is this danger?  How serious is this demo?

It's not a demo. It's in use (proprietary code layered on top of 
SelectDispatcher which is open) as part of a service a major 
player in the video editing industry has recently launched, 
both on the client and server side. Anyone in that industry can 
probably figure out who and (if they read the trades) maybe 
even what from the above, but I'm not comfortable saying more 
publicly.

- Gordon


From gmcm@hypernet.com  Wed Mar 14 02:55:44 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 21:55:44 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103140128.OAA05167@s454.cosc.canterbury.ac.nz>
References: <3AAE0FE3.2206.7AB85588@localhost>
Message-ID: <3AAE9760.19887.7CC991FF@localhost>

Greg Ewing wrote:

> Gordon McMillan <gmcm@hypernet.com>:
> 
> > But magic methods are a convenience. There's 
> > absolutely nothing there that can't be done another way.
> 
> Strictly speaking that's true, but from a practical standpoint I
> think you will *have* to address __init__ at least, because it is
> so ubiquitous and ingrained in the Python programmer's psyche.
> Asking Python programmers to give up using __init__ methods will
> be greeted with about as much enthusiasm as if you asked them to
> give up using all identifiers containing the letter 'e'. :-)

No one's asking them to give up __init__. Just asking them 
not to transfer control from inside an __init__. There are good 
reasons not to transfer control to another thread from within an 
__init__, too.
 
> >  - a GUI. Again, no big deal
> 
> Sorry, but I think it *is* a significantly large deal...
> 
> > be careful that the other threads don't 
> > touch the GUI directly. It's basically the same issue with
> > Stackless.
> 
> But the other threads don't have to touch the GUI directly
> to be a problem.
> 
> Suppose I'm building an IDE and I want a button which spawns a
> microthread to execute the user's code. The thread doesn't make
> any GUI calls itself, but it's spawned from inside a callback,
> which, if I understand correctly, will be impossible.

For a uthread, if it swaps out, yes, because that's an attempt 
to transfer to another uthread not spawned by the callback. So 
you will get an exception if you try it. If you simply want to 
create and use coroutines from within the callback, that's fine 
(though not terribly useful, since the GUI is blocked till you're 
done).
 
> > The one comparable situation 
> > in normal Python is crossing threads in callbacks. With the
> > exception of a couple of complete madmen (doing COM support),
> > everyone else learns to avoid the situation.
> 
> But if you can't even *start* a thread using a callback,
> how do you do anything with threads at all?

Checking the couple GUIs I've done that use threads (mostly I 
use idletasks in a GUI for background stuff) I notice I create 
the threads before starting the GUI. So in this case, I'd 
probably have a worker thread (real) and the GUI thread (real). 
The callback would queue up some work for the worker thread 
and return. The worker thread can use continuations or 
uthreads all it wants.
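That architecture fits in a screenful (a sketch; on_button_click stands in for a real GUI callback):

```python
import queue       # named Queue in the Python 2 of this era
import threading

# Sketch of the callback-queues-work pattern: the worker thread is
# created up front; the GUI callback only enqueues and returns.
jobs = queue.Queue()
results = []

def worker():
    while True:
        job = jobs.get()
        if job is None:     # sentinel: shut down
            break
        results.append(job * job)

t = threading.Thread(target=worker)
t.start()

def on_button_click(n):     # stands in for a real GUI callback
    jobs.put(n)             # returns immediately; the GUI stays live

on_button_click(3)
on_button_click(4)
jobs.put(None)
t.join()
print(results)
```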

My comments about GUIs were basically saying that you 
*have* to think about this stuff when you design a GUI - they 
all have rather strong opinions about how your app should be 
architected. You can get into trouble with any of the 
techniques (events, threads, idletasks...) they promote / allow 
/ use. I know it's gotten better, but not very long ago you had 
to be very careful simply to get Tk and threads to coexist.

I usually use idle tasks precisely because the chore of 
breaking my task into 0.1 sec chunks is usually less onerous 
than trying to get the GUI to let me do it some other way.

[Now I'll get floods of emails telling me *this* GUI lets me do it 
*that* way...  As far as I'm concerned, "least worst" is all any 
GUI can aspire to.]

- Gordon


From tim.one@home.com  Wed Mar 14 03:04:31 2001
From: tim.one@home.com (Tim Peters)
Date: Tue, 13 Mar 2001 22:04:31 -0500
Subject: [Python-Dev] comments on PEP 219
In-Reply-To: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIHJFAA.tim.one@home.com>

[Jeremy Hylton]
> ...
> One other set of issues, that is sort-of out of bounds for this
> particular PEP, is what control features do we want that can only be
> implemented with stackless.  Can we implement generators or coroutines
> efficiently without a stackless approach?

Icon/CLU-style generator/iterators always return/suspend directly to their
immediate caller/resumer, so it's impossible to get a C stack frame "stuck in
the middle":  whenever they're ready to yield (suspend or return), there's
never anything between them and the context that gave them control  (and
whether the context was coded in C or Python -- generators don't care).

While Icon/CLU do not do so, a generator/iterator in this sense can be a
self-contained object, passed around and resumed by anyone who feels like it;
this kind of object is little more than a single Python execution frame,
popped from the Python stack upon suspension and pushed back on upon
resumption.  For this reason, recursive interpreter calls don't bother it:
whenever it stops or pauses, it's at the tip of the current thread of
control, and returns control to "the next" frame, just like a vanilla
function return.  So if the stack is a linear list in the absence of
generators, it remains so in their presence.  It also follows that it's fine
to resume a generator by making a recursive call into the interpreter (the
resumption sequence differs from a function call in that it must set up the
guts of the eval loop from the state saved in the generator's execution
frame, rather than create a new execution frame).
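Python's own generators (which arrived later, in 2.2 via PEP 255) behave in exactly this Icon/CLU style: the yield always delivers control to the immediate resumer, no matter how deep the call chain that performed the resumption. A small sketch:

```python
# Generators suspend directly to their immediate resumer, even when
# the resumption happens deep inside other (recursive) calls -- so no
# C stack frame ever gets "stuck in the middle".
def gen():
    yield 1
    yield 2

def resume_from_deep(g, depth):
    if depth:
        return resume_from_deep(g, depth - 1)   # nested calls in between
    return next(g)     # the yield delivers its value right here

g = gen()
print(resume_from_deep(g, 5))
print(resume_from_deep(g, 3))
```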

But Guido usually has in mind a much fancier form of generator (note:  contra
PEP 219, I didn't write generator.py -- Guido wrote that after hearing me say
"generator" and falling for Majewski's hypergeneralization of the concept
<0.8 wink>), which can suspend to *any* routine "up the chain".  Then C stack
frames can certainly get stuck in the middle, and so that style of generator
is much harder to implement given the way the interpreter currently works.
In Icon *this* style of "generator" is almost never used, in part because it
requires using Icon's optional "co-expression" facilities (which are optional
because they require hairy platform-dependent assembler to trick the platform
C into supporting multiple stacks; Icon's generators don't need any of that).
CLU has nothing like it.

Ditto for coroutines.



From skip@pobox.com (Skip Montanaro)  Wed Mar 14 03:12:02 2001
From: skip@pobox.com (Skip Montanaro) (Skip Montanaro)
Date: Tue, 13 Mar 2001 21:12:02 -0600 (CST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE9760.19887.7CC991FF@localhost>
References: <3AAE0FE3.2206.7AB85588@localhost>
 <3AAE9760.19887.7CC991FF@localhost>
Message-ID: <15022.57730.265706.483989@beluga.mojam.com>

>>>>> "Gordon" == Gordon McMillan <gmcm@hypernet.com> writes:

    Gordon> No one's asking them to give up __init__. Just asking them not
    Gordon> to transfer control from inside an __init__. There are good
    Gordon> reasons not to transfer control to another thread from within an
    Gordon> __init__, too.
 
Is this same restriction placed on all "magic" methods like __getitem__?  Is
this the semantic difference between Stackless and CPython that people are
getting all in a lather about?

Skip





From gmcm@hypernet.com  Wed Mar 14 03:25:03 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 22:25:03 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.57730.265706.483989@beluga.mojam.com>
References: <3AAE9760.19887.7CC991FF@localhost>
Message-ID: <3AAE9E3F.9635.7CE46C9C@localhost>

> >>>>> "Gordon" == Gordon McMillan <gmcm@hypernet.com> writes:
> 
>     Gordon> No one's asking them to give up __init__. Just asking
>     Gordon> them not to transfer control from inside an __init__.
>     Gordon> There are good reasons not to transfer control to
>     Gordon> another thread from within an __init__, too.
> 
> Is this same restriction placed on all "magic" methods like
> __getitem__?  

In the absence of making them interpreter-recursion free, yes.

> Is this the semantic difference between Stackless
> and CPython that people are getting all in a lather about?

What semantic difference? You can't transfer control to a 
coroutine / uthread in a magic method in CPython, either 
<wink>.

- Gordon


From jeremy@alum.mit.edu  Wed Mar 14 01:17:39 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Tue, 13 Mar 2001 20:17:39 -0500 (EST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE9E3F.9635.7CE46C9C@localhost>
References: <3AAE9760.19887.7CC991FF@localhost>
 <3AAE9E3F.9635.7CE46C9C@localhost>
Message-ID: <15022.50867.210827.597710@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GMcM" == Gordon McMillan <gmcm@hypernet.com> writes:

  >> Is this the semantic difference between Stackless and CPython
  >> that people are getting all in a lather about?

  GMcM> What semantic difference? You can't transfer control to a
  GMcM> coroutine / urthread in a magic method in CPython, either
  GMcM> <wink>.

If I have a library or class that uses threads under the covers, I can
create the threads in whatever code block I want, regardless of what
is on the call stack above the block.  The reason that coroutines /
uthreads are different is that the semantics of control transfers are
tied to what the call stack looks like a) when the thread is created
and b) when a control transfer is attempted.

This restriction seems quite at odds with modularity.  (Could I import
a module that creates a thread within an __init__ method?)  The
correctness of a library or class depends on the entire call chain
involved in its use.

It's not at all modular, because a programmer could make a local
decision about organizing a particular module and cause errors in a
module that doesn't even use it directly.  This would occur if module A
uses uthreads, module B is a client of module A, and the user writes a
program that uses module B.  He unsuspectingly adds a call to module A
in an __init__ method and *boom*.

Jeremy

"Python is a language in which the use of uthreads in a module you
didn't know existed can render your own program unusable."  <wink>


From greg@cosc.canterbury.ac.nz  Wed Mar 14 05:09:42 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 14 Mar 2001 18:09:42 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE9760.19887.7CC991FF@localhost>
Message-ID: <200103140509.SAA05205@s454.cosc.canterbury.ac.nz>

> I'd probably have a worker thread (real) and the GUI thread (real). 

If I have to use real threads to get my uthreads to work
properly, there doesn't seem to be much point in using
uthreads to begin with.

> you *have* to think about this stuff when you design a GUI...
> You can get into trouble with any of the techniques...
> not very long ago you had to be very careful simply to get 
> TK and threads to coexist.

Microthreads should *free* one from all that nonsense. They
should be simple, straightforward, easy to use, and bulletproof.
Instead it seems they're going to be just as tricky to use
properly, only in different ways.

Oh, well, perhaps I'll take another look after a few more
releases and see if anything has improved.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim.one@home.com  Wed Mar 14 05:34:11 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 14 Mar 2001 00:34:11 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131142.GAA30567@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEIMJFAA.tim.one@home.com>

[Paul Prescod]
> David Ascher suggested during the talk that comparisons of floats could
> raise a warning unless you turned that warning off (which only
> knowledgable people would do). I think that would go a long way to
> helping them find and deal with serious floating point inaccuracies in
> their code.

It would go a very short way -- but that may be better than nothing.  Most fp
disasters have to do with "catastrophic cancellation" (a tech term, not a
pejorative), and comparisons have nothing to do with those.  Alas, CC can't
be detected automatically short of implementing interval arithmetic, and even
then tends to raise way too many false alarms unless used in algorithms
designed specifically to exploit interval arithmetic.
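A one-line instance of the phenomenon, with no comparison anywhere in sight:

```python
# Catastrophic cancellation: subtracting nearly equal values destroys
# exactly the information the difference was meant to carry.  At 1e16
# the spacing between adjacent doubles is 2.0, so the 1.0 is rounded away.
big = 1e16
print((big + 1.0) - big)    # 0.0, although the true answer is 1.0
```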

[Guido]
> You mean only for == and !=, right?

You have to do all comparisons or none (see below), but in the former case a
warning is silly (groundless paranoia) *unless* the comparands are "close".

Before we boosted repr(float) precision so that people could *see* right off
that they didn't understand Python fp arithmetic, complaints came later.  For
example, I've lost track of how many times I've explained variants of this
one:

Q: How come this loop goes around 11 times?

>>> delta = 0.1
>>> x = 0.0
>>> while x < 1.0:   # no == or != here
...     print x
...     x = x + delta
...

0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
>>>

A: It's because 0.1 is not exactly representable in binary floating-point.

Just once out of all those times, someone came back several days later after
spending many hours struggling to understand what that really meant and
implied.  Their followup question was depressingly insightful:

Q. OK, I understand now that for 754 doubles, the closest possible
   approximation to one tenth is actually a little bit *larger* than
   0.1.  So how come when I add a thing *bigger* than one tenth together
   ten times, I get a result *smaller* than one?
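The answer, replayed: each intermediate addition is itself rounded, and here the roundings happen to accumulate downward:

```python
# The closest double to 0.1 is a hair *above* one tenth, yet ten
# rounded additions of it land just *below* 1.0 -- hence the loop's
# eleventh iteration.
x = 0.0
for _ in range(10):
    x += 0.1

print(x < 1.0)     # True
print(repr(x))     # 0.9999999999999999
```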

the-fun-never-ends-ly y'rs  - tim



From tim.one@home.com  Wed Mar 14 06:01:24 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 14 Mar 2001 01:01:24 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <Pine.LNX.4.10.10103131039260.13108-100000@skuld.kingmanhall.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIOJFAA.tim.one@home.com>

[Ka-Ping Yee]
> I'll argue now -- just as i argued back then, but louder! -- that
> this isn't necessary.  repr(1.1) can be 1.1 without losing any precision.
>
> Simply stated, you only need to display as many decimal places as are
> necessary to regenerate the number.  So if x happens to be the
> floating-point number closest to 1.1, then 1.1 is all you have to show.
>
> By definition, if you type x = 1.1, x will get the floating-point
> number closest in value to 1.1.

This claim is simply false unless the platform string->float routines do
proper rounding, and that's more demanding than even the anal 754 std
requires (because in the general case proper rounding requires bigint
arithmetic).

> So x will print as 1.1.

By magic <0.1 wink>?

This *can* work, but only if Python does float<->string conversions itself,
leaving the platform libc out of it.  I gave references to directly relevant
papers, and to David Gay's NETLIB implementation code, the last time we went
thru this.  Note that Gay's code bristles with platform #ifdef's, because
there is no portable way in C89 to get the bit-level info this requires.
It's some of the most excruciatingly delicate code I've ever plowed thru.  If
you want to submit it as a patch, I expect Guido will require a promise in
blood that he'll never have to maintain it <wink>.
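(As it happens, CPython did eventually take exactly this route: since 3.1, repr of a float is the correctly-rounded shortest string that round-trips, built on Gay-style conversion code. The behavior is easy to check there:)

```python
# The behavior Ping asks for, as later adopted by CPython (3.1+):
# repr emits the fewest digits that convert back to the same double.
x = 1.1
print(repr(x))                  # 1.1
print(float(repr(x)) == x)      # True: the round trip is exact
```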

BTW, Scheme implementations are required to do proper rounding in both
string<->float directions, and minimal-length (wrt idempotence) float->string
conversions (provided that a given Scheme supports floats at all).  That was
in fact the original inspiration for Clinger, Steele and White's work in this
area.  It's exactly what you want too (because it's exactly what you need to
make your earlier claims true).  A more recent paper by Dybvig and ??? (can't
remember now) builds on the earlier work, using Gay's code by reference as a
subroutine, and speeding some of the other cases where Gay's code is slothful
by a factor of about 70.

scheme-does-a-better-job-on-numerics-in-many-respects-ly y'rs  - tim



From tim.one@home.com  Wed Mar 14 06:21:57 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 14 Mar 2001 01:21:57 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103140509.SAA05205@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEIPJFAA.tim.one@home.com>

[Greg Ewing]
> If I have to use real threads to get my uthreads to work
> properly, there doesn't seem to be much point in using
> uthreads to begin with.
> ...
> Microthreads should *free* one from all that nonsense. They
> should be simple, straightforward, easy to use, and bulletproof.
> Instead it seems they're going to be just as tricky to use
> properly, only in different ways.

Stackless uthreads don't exist to free you from nonsense, they exist because
they're much lighter than OS-level threads.  You can have many more of them
and context switching is much quicker.  Part of the price is that they're not
as flexible as OS-level threads:  because they get no support at all from the
OS, they have no way to deal with the way C (or any other language) uses the
HW stack (from where most of the odd-sounding restrictions derive).

One thing that impressed me at the Python Conference last week was how many
of the talks I attended presented work that relied on, or was in the process
of moving to, Stackless.  This stuff has *very* enthused users!  Unsure how
many rely on uthreads vs how many on coroutines (Stackless wasn't the focus
of any of these talks), but they're the same deal wrt restrictions.

BTW, I don't know of a coroutine facility in any x-platform language that
plays nicely (in the sense of not imposing mounds of implementation-derived
restrictions) across foreign-language boundaries.  If you do, let's get a
reference so we can rip off their secrets.

uthreads-are-much-easier-to-provide-in-an-os-than-in-a-language-ly
    y'rs  - tim



From tim.one@home.com  Wed Mar 14 07:27:21 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 14 Mar 2001 02:27:21 -0500
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <200103131532.f2DFWpw04691@snark.thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>

[Eric S. Raymond]
> I bit the bullet and hand-rolled a recursive-descent expression parser
> for CML2 to replace the Earley-algorithm parser described in my
> previous note.  It is a little more than twice as fast as the SPARK
> code, cutting the CML2 compiler runtime almost exactly in half.
>
> Sigh.  I had been intending to recommend SPARK for the Python standard
> library -- as I pointed out in my PC9 paper, it would be the last
> piece stock Python needs to be an effective workbench for
> minilanguage construction.  Unfortunately I'm now convinced Paul
> Prescod is right and it's too slow for production use, at least at
> version 0.6.1.

If all you got out of crafting a one-grammar parser by hand is a measly
factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
parser generators for restricted grammars, in C).  For the all-purpose Earley
parser to get that close is really quite an accomplishment!  SPARK was
written primarily for rapid prototyping, at which it excels (how many times
did you change your grammar during development?  how much longer would it
have taken you to adjust had you needed to rework your RD parser each time?).

perhaps-you're-just-praising-it-via-faint-damnation<wink>-ly y'rs  - tim



From fredrik@pythonware.com  Wed Mar 14 08:25:19 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 09:25:19 +0100
Subject: [Python-Dev] CML2 compiler speedup
References: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>
Message-ID: <014401c0ac60$4f0b1c60$e46940d5@hagrid>

tim wrote:
> If all you got out of crafting a one-grammar parser by hand is a measly
> factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
> parser generators for restricted grammars, in C).

talking about performance, has anyone played with using SRE's
lastindex/lastgroup stuff with SPARK?

(is there anything else I could do in SRE to make SPARK run faster?)
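(for reference, the idea is that a scanner built on one big alternation can ask
the match object which branch fired instead of probing each group in turn; a
rough sketch in modern Python spelling, with a toy token set rather than
SPARK's actual scanner:)

```python
import re

# Toy sketch of the lastgroup idea: one combined alternation per scanner,
# and m.lastgroup names the branch that matched, so the tokenizer
# doesn't have to probe every group to find out which token it got.
TOKEN = re.compile(r"(?P<number>\d+)|(?P<name>[A-Za-z_]\w*)|(?P<op>[+\-*/])")

def tokenize(text):
    for m in TOKEN.finditer(text):
        yield m.lastgroup, m.group()

print(list(tokenize("x1+42")))
# [('name', 'x1'), ('op', '+'), ('number', '42')]
```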

Cheers /F



From tismer@tismer.com  Wed Mar 14 09:19:44 2001
From: tismer@tismer.com (Christian Tismer)
Date: Wed, 14 Mar 2001 10:19:44 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <3AAE0FE3.2206.7AB85588@localhost>
 <3AAE9760.19887.7CC991FF@localhost> <15022.57730.265706.483989@beluga.mojam.com>
Message-ID: <3AAF37B0.DFCC027A@tismer.com>


Skip Montanaro wrote:
> 
> >>>>> "Gordon" == Gordon McMillan <gmcm@hypernet.com> writes:
> 
>     Gordon> No one's asking them to give up __init__. Just asking them not
>     Gordon> to transfer control from inside an __init__. There are good
>     Gordon> reasons not to transfer control to another thread from within an
>     Gordon> __init__, too.
> 
> Is this same restriction placed on all "magic" methods like __getitem__?  Is
> this the semantic difference between Stackless and CPython that people are
> getting all in a lather about?

Yes, at the moment all __xxx__ stuff.
The semantic difference is at a different location:
Normal function calls are free to switch around. That is the
big advantage over CPython, which might be called a semantic
difference.
The behavior/constraints of __xxx__ have not changed yet; here
both Pythons are exactly the same! :-)

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From tismer@tismer.com  Wed Mar 14 09:39:17 2001
From: tismer@tismer.com (Christian Tismer)
Date: Wed, 14 Mar 2001 10:39:17 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103140128.OAA05167@s454.cosc.canterbury.ac.nz>
Message-ID: <3AAF3C45.1972981F@tismer.com>


Greg Ewing wrote:

<snip>

> Suppose I'm building an IDE and I want a button which spawns
> a microthread to execute the user's code. The thread doesn't
> make any GUI calls itself, but it's spawned from inside a
> callback, which, if I understand correctly, will be impossible.

This doesn't need to be a problem with Microthreads.
Your IDE can spawn a new process at any time. The
process will simply not be started until the interpreter recursion is
done. I think this is exactly what we want.
Similarly for the __init__ situation: usually you want
to create a new process, but you don't care when it
is finally scheduled.

So, the only remaining restriction is: If you *force* the
system to schedule microthreads in a recursive call, then
you will be bitten by the first uthread that returns to
a frame which has been locked by a different interpreter.

It is pretty fine to create uthreads or coroutines in
the context of __init__. Stackless of course allows
re-using frames that have been in any recursion. The
point is: after a recursive interpreter is gone, there
is no problem with using its frames.
We just need to avoid making __init__ the workhorse,
which is bad style anyway.

> > The one comparable situation
> > in normal Python is crossing threads in callbacks. With the
> > exception of a couple of complete madmen (doing COM
> > support), everyone else learns to avoid the situation.
> 
> But if you can't even *start* a thread using a callback,
> how do you do anything with threads at all?

You can *create* a thread using a callback. It will be started
after the callback is gone. That's sufficient in most cases.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From tim.one@home.com  Wed Mar 14 11:02:12 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 14 Mar 2001 06:02:12 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131018.FAA30047@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEJIJFAA.tim.one@home.com>

[Guido]
> I'd like to argue about that.  I think the extent to which HWFP
> doesn't work for newbies is mostly related to the change we made in
> 2.0 where repr() (and hence the interactive prompt) show full
> precision, leading to annoyances like repr(1.1) == '1.1000000000000001'.
>
> I've noticed that the number of complaints I see about this went way
> up after 2.0 was released.

Indeed yes, but I think that's a *good* thing.  We can't stop people from
complaining, but we can influence *what* they complain about, and it's
essential for newbies to learn ASAP that they have no idea how binary fp
arithmetic works.  Note that I spend a lot more of my life replying to these
complaints than you <wink>, and I can cut virtually all of them off early now
by pointing to the RepresentationError wiki page.  Before, it was an endless
sequence of "unique" complaints about assorted things that "didn't work
right", and that was much more time-consuming for me.  Of course, it's not a
positive help to the newbies so much as that scaring them early saves them
greater troubles later <no wink>.
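(The core surprise takes three lines to demonstrate; a sketch in modern Python
spelling, using format() to force the 17- and 15-digit displays the thread is
arguing about:)

```python
# 1.1 has no exact binary representation.  A 17-significant-digit
# display (what repr() switched to in 2.0) shows the value actually
# stored; a 15-digit display hides it -- Kahan's "pious fraud".
x = 1.1
print(format(x, '.17g'))   # 1.1000000000000001
print(format(x, '.15g'))   # 1.1
print(0.1 + 0.2 == 0.3)    # False -- the classic newbie surprise
```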

Regular c.l.py posters can (& do!) handle this now too, thanks to hearing the
*same* complaint repeatedly now.  For example, over the past two days there
have been more than 40 messages on c.l.py about this, none of them stemming
from the conference or Moshe's PEP, and none of them written by me.  It's a
pattern:

+ A newcomer to Python complains about the interactive-prompt fp display.

+ People quickly uncover that's the least of their problems (that, e.g., they
truly *believe* Python should get dollars and cents exactly right all by
itself, and are programming as if that were true).

+ The fp display is the easiest of all fp surprises to explain fully and
truthfully (although the wiki page should make painfully clear that "easiest"
!= "easy" by a long shot), so is the quickest route toward disabusing them of
their illusions.

+ A few people suggest they use my FixedPoint.py instead; a few more that
they compute using cents instead (using ints or longs); and there's always
some joker who flames that if they're writing code for clients and have such
a poor grasp of fp reality, they should be sued for "technical incompetence".

Except for the flames, this is good in my eyes.

> I expect that most newbies don't use floating point in a fancy way,
> and would never notice it if it was slightly off as long as the output
> was rounded like it was before 2.0.

I couldn't disagree more that ignorance is to be encouraged, either in
newbies or in experts.  Computational numerics is a difficult field with
major consequences in real life, and if the language can't actively *help*
people with that, it should at least avoid encouraging a fool's confidence in
their folly.  If that isn't virulent enough for you <wink>, read Kahan's
recent "Marketing versus Mathematics" rant, here:

    http://www.cs.berkeley.edu/~wkahan/MktgMath.pdf

A point he makes over and over, illustrated with examples, is this:

    Decimal displays of Binary nonintegers cannot always be WYSIWYG.

    Trying to pretend otherwise afflicts both customers and
    implementors with bugs that go mostly misdiagnosed, so “fixing”
    one bug merely spawns others. …

In a specific example of a nasty real-life bug beginning on page 13, he calls
the conceit (& source of the bug) of rounding fp displays to 15 digits
instead of 17 "a pious fraud".  And he's right.  It spares the implementer
some shallow complaints at the cost of leading naive users down a garden
path, where they end up deeper and deeper in weeds over their heads.

Of course he acknowledges that 17-digit display "[annoys] users who expected
roundoff to degrade only the last displayed digit of simple expressions, and
[confuses] users who did not expect roundoff at all" -- but seeking to fuzz
those truths has worse consequences.

In the end, he smacks up against the same need to favor one group at the
expense of the other:

   Binary floating-point is best for mathematicians, engineers and most
   scientists, and for integers that never get rounded off.  For everyone
   else Decimal floating-point is best because it is the only way What
   You See can be What You Get, which is a big step towards reducing
   programming languages’ capture cross-section for programming errors.

He's wrong via omission about the latter, though:  rationals are also a way
to achieve that (so long as you stick to + - * /; decimal fp is still
arguably better once a sqrt or transcendental gets into the pot).

>> Presumably ABC used rationals because usability studies showed
>> they worked best (or didn't they test this?).

> No, I think at best the usability studies showed that floating point
> had problems that the ABC authors weren't able to clearly explain to
> newbies.  There was never an experiment comparing FP to rationals.

>> Presumably the TeachScheme! dialect of Scheme uses rationals for
>> the same reason.

> Probably for the same reasons.

Well, you cannot explain binary fp *clearly* to newbies in reasonable time,
so I can't fault any teacher or newbie-friendly language for running away
from it.  Heck, most college-age newbies are still partly naive about fp
numerics after a good one-semester numerical analysis course (voice of
experience, there).

>> 1/10 and 0.1 are indeed very different beasts to me).

> Another hard question: does that mean that 1 and 1.0 are also very
> different beasts to you?  They weren't to the Alice users who started
> this by expecting 1/4 to represent a quarter turn.

1/4 *is* a quarter turn, and exactly a quarter turn, under every alternative
being discussed (binary fp, decimal fp, rationals).  The only time it isn't
is under Python's current rules.  So the Alice users will (presumably) be
happy with any change whatsoever from the status quo.

They may not be so happy if they do ten 1/10 turns and don't get back to
where they started (which would happen under binary fp, but not decimal fp or
rationals).

Some may even be so unreasonable <wink> as to be unhappy if six 1/6 turns
wasn't a wash (which leaves only rationals as surprise-free).
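(Both turn examples are mechanical to verify with rationals; a sketch using
the fractions module from much later Python versions, which obviously didn't
exist when this was written:)

```python
from fractions import Fraction

# Ten 1/10 turns (or six 1/6 turns) are an exact wash with rationals;
# the same walk in binary fp drifts off by a rounding error.
print(sum([Fraction(1, 10)] * 10) == 1)   # True
print(sum([Fraction(1, 6)] * 6) == 1)     # True
print(sum([0.1] * 10) == 1.0)             # False under binary fp
```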

Paul Dubois wants a way to tag fp literals (see his proposal).  That's
reasonable for his field.  DrScheme's Student levels have a way to tag
literals as inexact too, which allows students to get their toes wet with
binary fp while keeping their gonads on dry land.  Most people can't ride
rationals forever, but they're great training wheels; thoroughly adequate for
dollars-and-cents computations (the denominators don't grow when they're all
the same, so $1.23 computations don't "blow up" in time or space); and a
darned useful tool for dead-serious numeric grownups in sticky numerical
situations (rationals are immune to all of overflow, underflow, roundoff
error, and catastrophic cancellation, when sticking to + - * /).

Given that Python can't be maximally friendly to everyone here, and has a
large base of binary fp users I don't hate at all <wink>, the best I can
dream up is:

    1.3    binary fp, just like now

    1.3_r  exact rational (a tagged fp literal)

    1/3    exact rational

    1./3   binary fp

So, yes, 1.0 and 1 are different beasts to me:  the "." alone and without an
"_r" tag says "I'm an approximation, and approximations are contagious:
inexact in, inexact out".

Note that the only case where this changes the meaning of existing code is

    1/3

But that has to change anyway lest the Alice users stay stuck at 0 forever.

> You know where I'm leaning...  I don't know that newbies are genuinely
> hurt by FP.

They certainly are burned by binary FP if they go on to do any numeric
programming.  The junior high school textbook formula for solving a quadratic
equation is numerically unstable.  Ditto the high school textbook formula for
computing variance.  Etc.  They're *surrounded* by deep pits; but they don't
need to be, except for the lack of *some* way to spell a newbie-friendly
arithmetic type.
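(The quadratic-formula pit is easy to fall into; a sketch of the textbook
formula against the standard stable rewrite, with illustrative coefficients
not taken from any textbook in particular:)

```python
import math

def naive_roots(a, b, c):
    # The textbook formula: cancels catastrophically when b*b >> 4*a*c.
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def stable_roots(a, b, c):
    # Standard fix: compute the large-magnitude root first, then
    # recover the other from the product of the roots, c/a.
    d = math.sqrt(b * b - 4 * a * c)
    big = (-b - math.copysign(d, b)) / (2 * a)
    return c / (a * big), big

# The small root should be very close to -1e-8; the naive version
# loses most of its digits to cancellation.
print(naive_roots(1.0, 1e8, 1.0)[0])
print(stable_roots(1.0, 1e8, 1.0)[0])
```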

> If we do it right, the naive ones will try 11.0/10.0, see
> that it prints 1.1, and be happy;

Cool.  I make a point of never looking at my chest x-rays either <0.9 wink>.

> the persistent ones will try 1.1**2-1.21, ask for an explanation, and
> get an introduction to floating point.  This *doesn't* have to explain all
> the details, just the two facts that you can lose precision and that 1.1
> isn't representable exactly in binary.

Which leaves them where?  Uncertain & confused (as you say, they *don't* know
all the details, or indeed really any of them -- they just know "things go
wrong", without any handle on predicting the extent of the problems, let
alone any way of controlling them), and without an alternative they *can*
feel confident about (short of sticking to integers, which may well be the
most frequent advice they get on c.l.py).  What kind of way is that to treat
a poor newbie?

I'll close w/ Kahan again:

    Q. Besides its massive size, what distinguishes today’s market for
       floating-point arithmetic from yesteryears’ ?

    A. Innocence
       (if not inexperience, naïveté, ignorance, misconception,
        superstition, … )

non-extended-binary-fp-is-an-expert's-tool-ly y'rs  - tim



From bckfnn@worldonline.dk  Wed Mar 14 11:48:51 2001
From: bckfnn@worldonline.dk (Finn Bock)
Date: Wed, 14 Mar 2001 11:48:51 GMT
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <15022.34452.183052.362184@anthem.wooz.org>
References: <20010312220425.T404@xs4all.nl> <200103122332.SAA22948@cj20424-a.reston1.va.home.com> <15021.24645.357064.856281@anthem.wooz.org> <3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org>
Message-ID: <3aaf5a78.8312542@smtp.worldonline.dk>

>>>>>> "FB" == Finn Bock <bckfnn@worldonline.dk> writes:
>
>    | - and as keyword argument names in arglist
>
>I think this last one doesn't work:

[Barry]

>-------------------- snip snip --------------------
>Jython 2.0 on java1.3.0 (JIT: jitc)
>Type "copyright", "credits" or "license" for more information.
>>>> def foo(class=None): pass
>Traceback (innermost last):
>  (no code object) at line 0
>  File "<console>", line 1
>	def foo(class=None): pass
>	        ^
>SyntaxError: invalid syntax
>>>> def foo(print=None): pass
>Traceback (innermost last):
>  (no code object) at line 0
>  File "<console>", line 1
>	def foo(print=None): pass
>	        ^
>SyntaxError: invalid syntax
>-------------------- snip snip --------------------

You are trying to use it in the grammar production "varargslist". It
doesn't work there. It only works in the grammar production "arglist".

The distinction is a good example of how jython tries to make it
possible to use reserved words defined in external code, but does not
try to allow the use of reserved words everywhere.

regards,
finn


From bckfnn@worldonline.dk  Wed Mar 14 11:49:54 2001
From: bckfnn@worldonline.dk (Finn Bock)
Date: Wed, 14 Mar 2001 11:49:54 GMT
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <000b01c0ac1d$ad79bec0$e46940d5@hagrid>
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org> <000b01c0ac1d$ad79bec0$e46940d5@hagrid>
Message-ID: <3aaf5aa5.8357597@smtp.worldonline.dk>

>barry wrote:
>>
>>    | - and as keyword argument names in arglist
>>
>> I think this last one doesn't work:
>> 
>> -------------------- snip snip --------------------
>> Jython 2.0 on java1.3.0 (JIT: jitc)
>> Type "copyright", "credits" or "license" for more information.
>> >>> def foo(class=None): pass
>> Traceback (innermost last):
>>   (no code object) at line 0
>>   File "<console>", line 1
>> def foo(class=None): pass
>>         ^
>> SyntaxError: invalid syntax
>> >>> def foo(print=None): pass
>> Traceback (innermost last):
>>   (no code object) at line 0
>>   File "<console>", line 1
>> def foo(print=None): pass
>>         ^
>> SyntaxError: invalid syntax
>> -------------------- snip snip --------------------

[/F]

>>>> def spam(**kw):
>...     print kw
>...
>>>> spam(class=1)
>{'class': 1}
>>>> spam(print=1)
>{'print': 1}

Exactly.

This feature is mainly used by constructors for java objects, where
keywords become bean property assignments.

  b = JButton(text="Press Me", enabled=1, size=(30, 40))

is a shorthand for

  b = JButton()
  b.setText("Press Me")
  b.setEnabled(1)
  b.setSize(30, 40)

Since the bean property names are outside Jython's control, we allow
AnyName in that position.
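(In plain Python terms the shorthand behaves roughly like this sketch -- a
hypothetical re-creation of the mechanism, not Jython's actual code; Bean and
Button are made-up names:)

```python
# Constructor keywords are turned into setXxx() bean-property calls.
class Bean:
    def __init__(self, **kw):
        for name, value in kw.items():
            getattr(self, 'set' + name[0].upper() + name[1:])(value)

class Button(Bean):
    def setText(self, text): self.text = text
    def setEnabled(self, flag): self.enabled = flag

b = Button(text="Press Me", enabled=1)
print(b.text, b.enabled)   # Press Me 1
```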

regards,
finn


From fredrik@pythonware.com  Wed Mar 14 13:09:51 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 14:09:51 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org> <000b01c0ac1d$ad79bec0$e46940d5@hagrid> <3aaf5aa5.8357597@smtp.worldonline.dk>
Message-ID: <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>

finn wrote:

> >>>> spam(class=1)
> >{'class': 1}
> >>>> spam(print=1)
> >{'print': 1}
> 
> Exactly.

how hard would it be to fix this in CPython?  can it be
done in time for 2.1?  (Thomas?)

Cheers /F



From thomas@xs4all.net  Wed Mar 14 13:58:50 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 14 Mar 2001 14:58:50 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>; from fredrik@pythonware.com on Wed, Mar 14, 2001 at 02:09:51PM +0100
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org> <000b01c0ac1d$ad79bec0$e46940d5@hagrid> <3aaf5aa5.8357597@smtp.worldonline.dk> <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>
Message-ID: <20010314145850.D404@xs4all.nl>

On Wed, Mar 14, 2001 at 02:09:51PM +0100, Fredrik Lundh wrote:
> finn wrote:

> > >>>> spam(class=1)
> > >{'class': 1}
> > >>>> spam(print=1)
> > >{'print': 1}
> > 
> > Exactly.

> how hard would it be to fix this in CPython?  can it be
> done in time for 2.1?  (Thomas?)

Well, monday night my jetlag hit very badly (I flew back on the night from
saturday to sunday) and caused me to skip an entire night of sleep. I spent
part of that breaking my brain over the parser :) I have no experience with
parsers or parser-writing, by the way, so this comes hard to me, and I have
no clue how this is solved in other parsers.

I seriously doubt it can be done for 2.1, unless someone knows parsers well
and can deliver an extended version of the current parser well before the
next beta. Changing the parser to something not so limited as our current
parser would be too big a change to slip in right before 2.1. 

Fixing the current parser is possible, but not straightforward. As far as I
can figure out, the parser first breaks up the file in elements and then
classifies the elements, and if an element cannot be classified, it is left
as a bareword for the subsequent passes to catch it as either a valid
identifier in a valid context, or a syntax error.

I guess it should be possible to hack the parser so it accepts other
statements where it expects an identifier, and then treats those statements
as strings, but you can't just accept all statements -- some will be needed
to bracket the identifier, or you get weird behaviour when you say 'def ()'.
So you need to maintain a list of acceptable statements and try each of
those... My guess is that it's possible, I just haven't figured out how to
do it yet. Can we force a certain 'ordering' in the keywords (their symbolic
number as #defined in graminit.h) some way ?

Another solution would be to do it explicitly in Grammar. I posted an
attempt at that before, but it hurts. It can be done in two ways, both of
which hurt for different reasons :) For example,

funcdef: 'def' NAME parameters ':' suite

can be changed into

funcdef: 'def' nameorkw parameters ':' suite
nameorkw: NAME | 'def' | 'and' | 'pass' | 'print' | 'return' | ...

or into

funcdef: 'def' (NAME | 'def' | 'and' | 'pass' | 'print' | ...) parameters ':' suite

The first means changing the places that currently accept a NAME, and that
means that all places where the compiler does STR(node) have to be checked.
There is a *lot* of those, and it isn't directly obvious whether they expect
node to be a NAME, or really know that, or think they know that. STR() could
be made to detect 'nameorkw' nodetypes and get the STR() of its first child
if so, but that's really an ugly hack.

The second way is even more of an ugly hack, but it doesn't require any
changes in the parser. It just requires making the Grammar look like random
garbage :) Of course, we could keep the grammar the way it is, and
preprocess it before feeding it to the parser, extracting all keywords
dynamically and sneakily replacing NAME with (NAME | keywords )... hmm...
that might actually be workable. It would still be a hack, though.
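(That preprocessing idea could be sketched in a few lines -- a toy
illustration of the keyword-extraction step only, nothing like the real
Grammar/pgen pipeline:)

```python
import re

# Toy sketch of the dynamic-preprocessing idea: collect every quoted
# keyword in the Grammar text and sneakily replace NAME with
# (NAME | keywords) before handing the result to the parser generator.
def preprocess(grammar_text):
    keywords = sorted(set(re.findall(r"'([a-z]+)'", grammar_text)))
    alts = ' | '.join("'%s'" % kw for kw in keywords)
    return re.sub(r'\bNAME\b', '( NAME | %s )' % alts, grammar_text)

print(preprocess("funcdef: 'def' NAME parameters ':' suite"))
# funcdef: 'def' ( NAME | 'def' ) parameters ':' suite
```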

Now-for-something-easy--meetings!-ly y'rs ;)
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fredrik@pythonware.com  Wed Mar 14 14:03:21 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 15:03:21 +0100
Subject: [Python-Dev] OT: careful with that perl code
Message-ID: <011601c0ac8f$8cb66b80$0900a8c0@SPIFF>

http://slashdot.org/article.pl?sid=01/03/13/208259&mode=nocomment

    "because he wasn't familiar with the distinction between perl's
    scalar and list context, S. now has a police record"



From jeremy@alum.mit.edu  Wed Mar 14 14:25:49 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Wed, 14 Mar 2001 09:25:49 -0500 (EST)
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>
References: <20010312220425.T404@xs4all.nl>
 <200103122332.SAA22948@cj20424-a.reston1.va.home.com>
 <15021.24645.357064.856281@anthem.wooz.org>
 <3aae83f7.41314216@smtp.worldonline.dk>
 <15022.34452.183052.362184@anthem.wooz.org>
 <000b01c0ac1d$ad79bec0$e46940d5@hagrid>
 <3aaf5aa5.8357597@smtp.worldonline.dk>
 <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>
Message-ID: <15023.32621.173685.834783@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "FL" == Fredrik Lundh <fredrik@pythonware.com> writes:

  FL> finn wrote:
  >> >>>> spam(class=1)
  >> >{'class': 1}
  >> >>>> spam(print=1)
  >> >{'print': 1}
  >>
  >> Exactly.

  FL> how hard would it be to fix this in CPython?  can it be done in
  FL> time for 2.1?  (Thomas?)

Only if he can use the time machine to slip it in before 2.1b1.

Jeremy


From gmcm@hypernet.com  Wed Mar 14 15:08:16 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Wed, 14 Mar 2001 10:08:16 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.50867.210827.597710@w221.z064000254.bwi-md.dsl.cnc.net>
References: <3AAE9E3F.9635.7CE46C9C@localhost>
Message-ID: <3AAF4310.26204.7F683B24@localhost>

[Jeremy]
> >>>>> "GMcM" == Gordon McMillan <gmcm@hypernet.com> writes:
> 
>   >> Is this the semantic difference between Stackless and CPython
>   >> that people are getting all in a lather about?
> 
>   GMcM> What semantic difference? You can't transfer control to a
>   GMcM> coroutine / urthread in a magic method in CPython, either
>   GMcM> <wink>.
> 
> If I have a library or class that uses threads under the covers,
> I can create the threads in whatever code block I want,
> regardless of what is on the call stack above the block.  The
> reason that coroutines / uthreads are different is that the
> semantics of control transfers are tied to what the call stack
> looks like a) when the thread is created and b) when a control
> transfer is attempted.

Just b) I think.
 
> This restriction seems quite at odds with modularity.  (Could I
> import a module that creates a thread within an __init__ method?)
>  The correctness of a library or class depends on the entire call
> chain involved in its use.

Coroutines are not threads, nor are uthreads. Threads are 
used for comparison purposes because for most people, they 
are the only model for transfers of control outside regular call / 
return. My first serious programming language was IBM 
assembler which, at the time, did not have call / return. That 
was one of about 5 common patterns used. So I don't suffer 
from the illusion that call / return is the only way to do things.

In some ways threads make a lousy model for what's going 
on. They are OS level things. If you were able, on your first 
introduction to threads, to immediately fit them into your 
concept of "modularity", then you are truly unique. They are 
antithetical to my notion of modularity.

If you have another model outside threads and call / return, 
trot it out. It's sure to be a fresher horse than this one.
 
> It's not at all modular, because a programmer could make a local
> decision about organizing a particular module and cause errors in
a module they don't even use directly.  This would occur if
> module A uses uthreads, module B is a client of module A, and the
> user writes a program that uses module B.  He unsuspectingly adds
> a call to module A in an __init__ method and *boom*.

You will find this enormously more difficult to demonstrate 
than assert. Module A does something in the background. 
Therefore module B does something in the background. There
is no technique for backgrounding processing which does not 
have some implications for the user of module B. If modules A 
and or B are poorly coded, it will have obvious implications for 
the user.

> "Python is a language in which the use of uthreads in a module
> you didn't know existed can render your own program unusable." 
> <wink>

Your arguments are all based on rather fantastical notions of 
evil module writers pulling dirty tricks on clueless innocent 
programmers. In fact, they're based on the idea that the 
programmer was successfully using module AA, then 
switched to using A (which must have been advertised as a 
drop in replacement) and then found that they went "boom" in 
an __init__ method that used to work. Python today has no 
shortage of ways in which evil module writers can cause 
misery for programmers. Stackless does not claim that 
module writers claiming full compatibility are telling the truth. If 
module A does not suit your needs, go back to module AA.

Obviously, those of us who like Stackless would be delighted 
to have all interpreter recursions removed. It's also obvious 
where your rhetorical argument is headed: Stackless is 
dangerous unless all interpreter recursions are eliminated; it's 
too much work to remove all interpreter recursions until Py4K; 
please reassign this PEP a nineteen digit number.

and-there-is-NO-truth-to-the-rumor-that-stackless-users
-eat-human-flesh-<munch, munch>-ly y'rs

- Gordon


From tismer@tismer.com  Wed Mar 14 15:23:38 2001
From: tismer@tismer.com (Christian Tismer)
Date: Wed, 14 Mar 2001 16:23:38 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <3AAE9E3F.9635.7CE46C9C@localhost> <3AAF4310.26204.7F683B24@localhost>
Message-ID: <3AAF8CFA.58A9A68B@tismer.com>


Gordon McMillan wrote:
> 
> [Jeremy]

<big snip/>

> Obviously, those of us who like Stackless would be delighted
> to have all interpreter recursions removed. It's also obvious
> where your rhetorical argument is headed: Stackless is
> dangerous unless all interpreter recursions are eliminated; it's
> too much work to remove all interpreter recursions until Py4K;
> please reassign this PEP a nineteen digit number.

Of course we would like to see all recursions vanish.
Unfortunately this would make Python's current codebase
vanish almost completely, too, which would be bad. :)

That's the reason to have Stack Lite.

The funny observation after following this thread:
It appears that Stack Lite is in fact best suited for
Microthreads, better than for coroutines.

Reason: Microthreads schedule automatically, when scheduling is allowed.
In normal use, it gives you no trouble to spawn a uthread
from any extension, since the scheduling is done by the
interpreter in charge only when it is active, after all nested
calls have been done.

Hence, Stack Lite gives us *all* of uthreads, and almost all of
generators and coroutines, except for the mentioned cases.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From guido@digicool.com  Wed Mar 14 15:26:23 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 14 Mar 2001 10:26:23 -0500
Subject: [Python-Dev] Kinds
In-Reply-To: Your message of "Tue, 13 Mar 2001 08:38:35 PST."
 <ADEOIFHFONCLEEPKCACCIEKKCGAA.paul@pfdubois.com>
References: <ADEOIFHFONCLEEPKCACCIEKKCGAA.paul@pfdubois.com>
Message-ID: <200103141526.KAA04151@cj20424-a.reston1.va.home.com>

I liked Paul's brief explanation of Kinds.  Maybe we could make it so
that there's a special Kind representing bignums, and eventually that
could become the default (as part of the int unification).  Then
everybody can have it their way.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Wed Mar 14 15:33:50 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 14 Mar 2001 10:33:50 -0500
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
In-Reply-To: Your message of "Tue, 13 Mar 2001 19:08:05 +0100."
 <20010313190805.C404@xs4all.nl>
References: <E14ciAp-0005dJ-00@darjeeling> <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>
 <20010313190805.C404@xs4all.nl>
Message-ID: <200103141533.KAA04216@cj20424-a.reston1.va.home.com>

> I think the main reason for
> separate lists is to allow non-python-dev-ers easy access to the lists. 

Yes, this is the main reason.

I like it, it keeps my inbox separated out.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From Samuele Pedroni <pedroni@inf.ethz.ch>  Wed Mar 14 15:41:03 2001
From: Samuele Pedroni <pedroni@inf.ethz.ch> (Samuele Pedroni)
Date: Wed, 14 Mar 2001 16:41:03 +0100 (MET)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
Message-ID: <200103141541.QAA03543@core.inf.ethz.ch>

Hi.

First of all I should admit I don't know what was discussed
at IPC9 about Stackless Python.

My plain question (as a jython developer): is there a real intention
to make python stackless in the short term (2.2, 2.3...)?

AFAIK then for jython there are three options:
1 - Just don't care
2 - A major rewrite with performance issues (but AFAIK nobody has
  the resources for doing that)
3 - try to implement some of the high-level features offered, through threads
   (which could be pointless from a performance point of view:
     e.g. microthreads through threads, not that nice).

The options are 3, just for the theoretical sake of compatibility
(I don't see the point of porting stackless-based python code to jython),
or 1 plus some amount of frustration <wink>. Am I missing something?

The problem will be more serious if the std lib begins to make
heavy use of the stackless features.


regards, Samuele Pedroni.



From barry@digicool.com  Wed Mar 14 16:06:57 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Wed, 14 Mar 2001 11:06:57 -0500
Subject: [Python-Dev] OT: careful with that perl code
References: <011601c0ac8f$8cb66b80$0900a8c0@SPIFF>
Message-ID: <15023.38689.298294.736516@anthem.wooz.org>

>>>>> "FL" == Fredrik Lundh <fredrik@pythonware.com> writes:

    FL> http://slashdot.org/article.pl?sid=01/03/13/208259&mode=nocomment

    FL>     "because he wasn't familiar with the distinction between
    FL> perl's scalar and list context, S. now has a police record"

If it's true, I don't know whether that article scares or depresses me more.

born-in-the-usa-ly y'rs,
-Barry


From aycock@csc.UVic.CA  Wed Mar 14 18:02:43 2001
From: aycock@csc.UVic.CA (John Aycock)
Date: Wed, 14 Mar 2001 10:02:43 -0800
Subject: [Python-Dev] CML2 compiler speedup
Message-ID: <200103141802.KAA02907@valdes.csc.UVic.CA>

| talking about performance, has anyone played with using SRE's
| lastindex/lastgroup stuff with SPARK?

Not yet.  I will defer to Tim's informed opinion on this.

| (is there anything else I could do in SRE to make SPARK run faster?)

Well, if I'm wishing..  :-)

I would like all the parts of an alternation A|B|C to be searched for
at the same time (my assumption is that they aren't currently).  And
I'd also love a flag that would disable "first then longest" semantics
in favor of always taking the longest match.
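The "first then longest" behaviour being complained about is easy to demonstrate with Python's re module (a small illustrative sketch, not part of the original mail):

```python
import re

# Python's regex engine uses "first then longest" alternation:
# the leftmost branch that matches wins, even when a later
# branch would have matched a longer string.
print(re.match(r'a|ab', 'ab').group())  # -> 'a', not 'ab'

# POSIX-style "always longest" semantics must be faked by hand,
# e.g. by ordering the branches longest-first:
print(re.match(r'ab|a', 'ab').group())  # -> 'ab'
```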

John


From thomas@xs4all.net  Wed Mar 14 18:36:17 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 14 Mar 2001 19:36:17 +0100
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <200103141802.KAA02907@valdes.csc.UVic.CA>; from aycock@csc.UVic.CA on Wed, Mar 14, 2001 at 10:02:43AM -0800
References: <200103141802.KAA02907@valdes.csc.UVic.CA>
Message-ID: <20010314193617.F404@xs4all.nl>

On Wed, Mar 14, 2001 at 10:02:43AM -0800, John Aycock wrote:

> I would like all the parts of an alternation A|B|C to be searched for
> at the same time (my assumption is that they aren't currently).  And
> I'd also love a flag that would disable "first then longest" semantics
> in favor of always taking the longest match.

While on that subject.... Is there an easy way to get all the occurrences of
a repeating group? I wanted to do something like 'foo(bar|baz)+' and be
able to retrieve all matches of the group. I fixed it differently now, but I
kept wondering why that wasn't possible.
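For context, a sketch of the behaviour being asked about: a repeated group only keeps its last repetition, and the usual workaround is to capture the whole repeated region and split it afterwards (illustrative code, not from the original mail):

```python
import re

# A repeated group retains only its *last* repetition:
m = re.match(r'foo(bar|baz)+', 'foobarbazbar')
print(m.group(1))  # -> 'bar'; the earlier 'bar' and 'baz' are lost

# Workaround: capture the whole repeated region, then pull the
# individual occurrences back out with findall():
m = re.match(r'foo((?:bar|baz)+)', 'foobarbazbar')
print(re.findall(r'bar|baz', m.group(1)))  # -> ['bar', 'baz', 'bar']
```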

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From esr@golux.thyrsus.com  Tue Mar 13 22:17:42 2001
From: esr@golux.thyrsus.com (Eric)
Date: Tue, 13 Mar 2001 14:17:42 -0800
Subject: [Python-Dev] freeze is broken in 2.x
Message-ID: <E14cx6s-0002zN-00@golux.thyrsus.com>

It appears that the freeze tools are completely broken in 2.x.  This 
is rather unfortunate, as I was hoping to use them to end-run some
objections to CML2 and thereby get python into the Linux kernel tree.

I have fixed some obvious errors (use of the deprecated 'cmp' module;
use of regex) but I have encountered run-time errors that are beyond
my competence to fix.  From a cursory inspection of the code it looks
to me like the freeze tools need adaptation to the new
distutils-centric build process.

Do these tools have a maintainer?  They need some serious work.
--
							>>esr>>


From thomas.heller@ion-tof.com  Wed Mar 14 21:23:39 2001
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Wed, 14 Mar 2001 22:23:39 +0100
Subject: [Python-Dev] freeze is broken in 2.x
References: <E14cx6s-0002zN-00@golux.thyrsus.com>
Message-ID: <05fd01c0accd$0a1dc450$e000a8c0@thomasnotebook>

> It appears that the freeze tools are completely broken in 2.x.  This 
> is rather unfortunate, as I was hoping to use them to end-run some
> objections to CML2 and thereby get python into the Linux kernel tree.
> 
> I have fixed some obvious errors (use of the deprecated 'cmp' module;
> use of regex) but I have encountered run-time errors that are beyond
> my competence to fix.  From a cursory inspection of the code it looks
> to me like the freeze tools need adaptation to the new
> distutils-centric build process.

I have some ideas about merging freeze into distutils, but this is
nothing which could be implemented for 2.1.

> 
> Do these tools have a maintainer?  They need some serious work.

At least they seem to have users.

Thomas



From esr@thyrsus.com  Wed Mar 14 21:37:10 2001
From: esr@thyrsus.com (Eric)
Date: Wed, 14 Mar 2001 13:37:10 -0800
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>; from tim.one@home.com on Wed, Mar 14, 2001 at 02:27:21AM -0500
References: <200103131532.f2DFWpw04691@snark.thyrsus.com> <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>
Message-ID: <20010314133710.J2046@thyrsus.com>

Tim Peters <tim.one@home.com>:
> If all you got out of crafting a one-grammar parser by hand is a measly
> factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
> parser generators for restricted grammars, in C).  For the all-purpose Earley
> parser to get that close is really quite an accomplishment!  SPARK was
> written primarily for rapid prototyping, at which it excels (how many times
> did you change your grammar during development?  how much longer would it
> have taken you to adjust had you needed to rework your RD parser each time?).

SPARK is indeed a wonderful prototyping tool, and I admire John Aycock for
producing it (though he really needs to do better on the documentation).

Unfortunately, Michael Elizabeth Chastain pointed out that it imposes a
bad startup delay in some important cases of CML2 usage.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Americans have the will to resist because you have weapons. 
If you don't have a gun, freedom of speech has no power.
         -- Yoshimi Ishikawa, Japanese author, in the LA Times 15 Oct 1992


From esr@thyrsus.com  Wed Mar 14 21:38:14 2001
From: esr@thyrsus.com (Eric)
Date: Wed, 14 Mar 2001 13:38:14 -0800
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <014401c0ac60$4f0b1c60$e46940d5@hagrid>; from fredrik@pythonware.com on Wed, Mar 14, 2001 at 09:25:19AM +0100
References: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com> <014401c0ac60$4f0b1c60$e46940d5@hagrid>
Message-ID: <20010314133814.K2046@thyrsus.com>

Fredrik Lundh <fredrik@pythonware.com>:
> tim wrote:
> > If all you got out of crafting a one-grammar parser by hand is a measly
> > factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
> > parser generators for restricted grammars, in C).
> 
> talking about performance, has anyone played with using SRE's
> lastindex/lastgroup stuff with SPARK?
> 
> (is there anything else I could do in SRE to make SPARK run faster?)

Wouldn't help me, I wasn't using the SPARK scanner.  The overhead really
was in the parsing.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Gun Control: The theory that a woman found dead in an alley, raped and
strangled with her panty hose, is somehow morally superior to a
woman explaining to police how her attacker got that fatal bullet wound.
	-- L. Neil Smith


From guido@digicool.com  Wed Mar 14 23:05:50 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 14 Mar 2001 18:05:50 -0500
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: Your message of "Tue, 13 Mar 2001 14:17:42 PST."
 <E14cx6s-0002zN-00@golux.thyrsus.com>
References: <E14cx6s-0002zN-00@golux.thyrsus.com>
Message-ID: <200103142305.SAA05872@cj20424-a.reston1.va.home.com>

> It appears that the freeze tools are completely broken in 2.x.  This 
> is rather unfortunate, as I was hoping to use them to end-run some
> objections to CML2 and thereby get python into the Linux kernel tree.
> 
> I have fixed some obvious errors (use of the deprecated 'cmp' module;
> use of regex) but I have encountered run-time errors that are beyond
> my competence to fix.  From a cursory inspection of the code it looks
> to me like the freeze tools need adaptation to the new
> distutils-centric build process.
> 
> Do these tools have a maintainer?  They need some serious work.

The last maintainers were me and Mark Hammond, but neither of us has
time to look into this right now.  (At least I know I don't.)

What kind of errors do you encounter?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From tim.one@home.com  Thu Mar 15 00:28:15 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 14 Mar 2001 19:28:15 -0500
Subject: [Python-Dev] 2.1b2 next Friday?
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMGJFAA.tim.one@home.com>

We need another beta release (according to me).  Anyone disagree?

If not, let's pump it out next Friday, 23-Mar-2001.  That leaves 3 weeks for
intense final testing before 2.1 final (which PEP 226 has scheduled for
13-Apr-2001).



From greg@cosc.canterbury.ac.nz  Thu Mar 15 00:31:00 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 15 Mar 2001 13:31:00 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAF3C45.1972981F@tismer.com>
Message-ID: <200103150031.NAA05310@s454.cosc.canterbury.ac.nz>

Christian Tismer <tismer@tismer.com>:

> You can *create* a thread using a callback.

Okay, that's not so bad. (An earlier message seemed to
be saying that you couldn't even do that.)

But what about GUIs such as Tkinter which have a
main loop in C that keeps control for the life of
the program? You'll never get back to the base-level
interpreter, not even between callbacks, so how do 
the uthreads get scheduled?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Thu Mar 15 00:47:12 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 15 Mar 2001 13:47:12 +1300 (NZDT)
Subject: WYSIWYG decimal fractions (RE: [Python-Dev] Minutes from the Numeric Coercion dev-day session)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEJIJFAA.tim.one@home.com>
Message-ID: <200103150047.NAA05314@s454.cosc.canterbury.ac.nz>

Maybe Python should use decimal FP as the *default* representation
for fractional numbers, with binary FP available as an option for
those who really want it.

Unadorned FP literals would give you decimal FP, as would float().
There would be another syntax for binary FP literals (e.g. a 'b'
suffix) and a bfloat() function.

My first thought was that binary FP literals should have to be
written in hex or octal. ("You want binary FP? Then you can jolly
well learn to THINK in it!") But that might be a little extreme.

By the way, what if CPU designers started providing decimal FP 
in hardware? Could scientists and ordinary mortals then share the
same FP system and be happy? The only disadvantage I can think of 
for the scientists is that a bit more memory would be required, but
memory is cheap nowadays. Are there any other drawbacks that
I haven't thought of?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim.one@home.com  Thu Mar 15 02:01:50 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 14 Mar 2001 21:01:50 -0500
Subject: WYSIWYG decimal fractions (RE: [Python-Dev] Minutes from the Numeric Coercion dev-day session)
In-Reply-To: <200103150047.NAA05314@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMIJFAA.tim.one@home.com>

[Greg Ewing]
> Maybe Python should use decimal FP as the *default* representation
> for fractional numbers, with binary FP available as an option for
> those who really want it.

NumPy users would scream bloody murder.

> Unadorned FP literals would give you decimal FP, as would float().
> There would be another syntax for binary FP literals (e.g. a 'b'
> suffix) and a bfloat() function.

Ditto.

> My first thought was that binary FP literals should have to be
> written in hex or octal. ("You want binary FP? Then you can jolly
> well learn to THINK in it!") But that might be a little extreme.

"A little"?  Yes <wink>.  Note that C99 introduces hex fp notation, though,
as it's the only way to be sure you're getting the bits you need (when it
really matters, as it can, e.g., in accurate implementations of math
libraries).
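As an aside (a modern sketch, not from the original mail): Python later grew hex-fp support mirroring C99's notation, which shows why it is bit-exact:

```python
# C99-style hex fp spells out the significand bits and a power-of-two
# exponent, so no decimal-to-binary rounding can occur:
# 0x1.8p0 == (1 + 8/16) * 2**0 == 1.5
x = float.fromhex('0x1.8p0')
print(x)  # -> 1.5

# Conversely, .hex() exposes the binary approximation hiding
# behind an innocent-looking decimal literal:
print((0.1).hex())  # -> '0x1.999999999999ap-4'
```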

> By the way, what if CPU designers started providing decimal FP
> in hardware? Could scientists and ordinary mortals then share the
> same FP system and be happy?

Sure!  Countless happy users of scientific calculators are evidence of
that -- virtually all calculators use decimal fp, for the obvious human
factors reasons ("obvious", I guess, to everyone except most post-1960's
language designers <wink>).

> The only disadvantage I can think of for the scientists is that a
> bit more memory would be required, but memory is cheap nowadays. Are
> there any other drawbacks that I haven't thought of?

See the Kahan paper I referenced yesterday (also the FAQ mentioned below).
He discusses it briefly.  Base 10 HW fp has small additional speed costs, and
makes error analysis a bit harder (at the boundaries where an exponent goes
up, the gaps between representable fp numbers are larger the larger the
base -- in a sense, e.g., whenever a decimal fp number ends with 5, it's
"wasting" a couple bits of potential precision; in that sense, binary fp is
provably optimal).
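(A later-era illustration of the trade-off, sketched with the decimal module that eventually grew out of Cowlishaw's spec, so not something available when this mail was written:)

```python
from decimal import Decimal

# Binary fp cannot represent 0.1 exactly, so repeated addition drifts:
print(0.1 + 0.1 + 0.1 == 0.3)  # -> False

# Decimal fp is WYSIWYG for decimal literals:
print(Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3'))  # -> True
```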


Mike Cowlishaw (REXX's father) is currently working hard in this area:

    http://www2.hursley.ibm.com/decimal/

That's an excellent resource for people curious about decimal fp.

REXX has many users in financial and commerical fields, where binary fp is a
nightmare to live with (BTW, REXX does use decimal fp).  An IBM study
referenced in the FAQ found that less than 2% of the numeric fields in
commercial databases contained data of a binary float type; more than half
used the database's form of decimal fp; the rest were of integer types.  It's
reasonable to speculate that much of the binary fp data was being used simply
because it was outside the dynamic range of the database's decimal fp type --
in which case even the tiny "< 2%" is an overstatement.

Maybe 5 years ago I asked Cowlishaw whether Python could "borrow" REXX's
software decimal fp routines.  He said sure.  Ironically, I had more time to
pursue it then than I have now ...

less-than-zero-in-an-unsigned-type-ly y'rs  - tim



From greg@cosc.canterbury.ac.nz  Thu Mar 15 04:02:24 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 15 Mar 2001 17:02:24 +1300 (NZDT)
Subject: WYSIWYG decimal fractions (RE: [Python-Dev] Minutes from the Numeric Coercion dev-day session)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEMIJFAA.tim.one@home.com>
Message-ID: <200103150402.RAA05333@s454.cosc.canterbury.ac.nz>

Tim Peters <tim.one@home.com>:

> NumPy users would scream bloody murder.

It would probably be okay for NumPy to use binary FP by default.
If you're using NumPy, you're probably a scientist or mathematician
already and are aware of the issues.

The same goes for any other extension module designed for
specialist uses, e.g. 3D graphics.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+



From aahz@panix.com  Thu Mar 15 06:14:54 2001
From: aahz@panix.com (aahz@panix.com)
Date: Thu, 15 Mar 2001 01:14:54 -0500 (EST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
Message-ID: <200103150614.BAA04221@panix6.panix.com>

[posted to c.l.py.announce and c.l.py; followups to c.l.py; cc'd to
python-dev]

Okay, folks, here it is, the first draft of the spec for creating Python
maintenance releases.  Note that I'm not on python-dev, so it's probably
better to have the discussion on c.l.py if possible.

            PEP: 6
          Title: Patch and Bug Fix Releases
        Version: $Revision: 1.1 $
         Author: aahz@pobox.com (Aahz)
         Status: Draft
           Type: Informational
        Created: 15-Mar-2001
   Post-History:
     _________________________________________________________________
   
  Abstract
  
    Python has historically had only a single fork of development,
    with releases having the combined purpose of adding new features
    and delivering bug fixes (these kinds of releases will be referred
    to as "feature releases").  This PEP describes how to fork off
    patch releases of old versions for the primary purpose of fixing
    bugs.

    This PEP is not, repeat NOT, a guarantee of the existence of patch
    releases; it only specifies a procedure to be followed if patch
    releases are desired by enough of the Python community willing to
    do the work.


  Motivation
  
    With the move to SourceForge, Python development has accelerated.
    There is a sentiment among part of the community that there was
    too much acceleration, and many people are uncomfortable with
    upgrading to new versions to get bug fixes when so many features
    have been added, sometimes late in the development cycle.

    One solution for this issue is to maintain old feature releases,
    providing bug fixes and (minimal!) feature additions.  This will
    make Python more attractive for enterprise development, where
    Python may need to be installed on hundreds or thousands of
    machines.

    At the same time, many of the core Python developers are
    understandably reluctant to devote a significant fraction of their
    time and energy to what they perceive as grunt work.  On the
    gripping hand, people are likely to feel discomfort around
    installing releases that are not certified by PythonLabs.


  Prohibitions
  
    Patch releases are required to adhere to the following
    restrictions:

    1. There must be zero syntax changes.  All .pyc and .pyo files
       must work (no regeneration needed) with all patch releases
       forked off from a feature release.

    2. There must be no incompatible C API changes.  All extensions
       must continue to work without recompiling in all patch releases
       in the same fork as a feature release.


  Bug Fix Releases
  
    Bug fix releases are a subset of all patch releases; it is
    prohibited to add any features to the core in a bug fix release.
    A patch release that is not a bug fix release may contain minor
    feature enhancements, subject to the Prohibitions section.

    The standard for patches to extensions and modules is a bit more
    lenient, to account for the possible desirability of including a
    module from a future version that contains mostly bug fixes but
    may also have some small feature changes.  (E.g. Fredrik Lundh
    making available the 2.1 sre module for 2.0 and 1.5.2.)


  Version Numbers
  
    Starting with Python 2.0, all feature releases are required to
    have the form X.Y; patch releases will always be of the form
    X.Y.Z.  To clarify the distinction between a bug fix release and a
    patch release, all non-bug fix patch releases will have the suffix
    "p" added.  For example, "2.1" is a feature release; "2.1.1" is a
    bug fix release; and "2.1.2p" is a patch release that contains
    minor feature enhancements.


  Procedure
  
    XXX This section is still a little light (and probably
    controversial!)

    The Patch Czar is the counterpart to the BDFL for patch releases.
    However, the BDFL and designated appointees retain veto power over
    individual patches and the decision of whether to label a patch
    release as a bug fix release.

    As individual patches get contributed to the feature release fork,
    each patch contributor is requested to consider whether the patch
    is a bug fix suitable for inclusion in a patch release.  If the
    patch is considered suitable, the patch contributor will mail the
    SourceForge patch (bug fix?) number to the maintainers' mailing
    list.

    In addition, anyone from the Python community is free to suggest
    patches for inclusion.  Patches may be submitted specifically for
    patch releases; they should follow the guidelines in PEP 3[1].

    The Patch Czar decides when there are a sufficient number of
    patches to warrant a release.  The release gets packaged up,
    including a Windows installer, and made public as a beta release.
    If any new bugs are found, they must be fixed and a new beta
    release publicized.  Once a beta cycle completes with no new bugs
    found, the package is sent to PythonLabs for certification and
    publication on python.org.

    Each beta cycle must last a minimum of one month.


  Issues To Be Resolved
  
    Should the first patch release following any feature release be
    required to be a bug fix release?  (Aahz proposes "yes".)

    Is it allowed to do multiple forks (e.g. is it permitted to have
    both 2.0.2 and 2.0.2p)?  (Aahz proposes "no".)

    Does it make sense for a bug fix release to follow a patch
    release?  (E.g., 2.0.1, 2.0.2p, 2.0.3.)

    Exactly how does a candidate patch release get submitted to
    PythonLabs for certification?  And what does "certification" mean,
    anyway?  ;-)

    Who is the Patch Czar?  Is the Patch Czar a single person?  (Aahz
    says "not me alone".  Aahz is willing to do a lot of the
    non-technical work, but Aahz is not a C programmer.)

    What is the equivalent of python-dev for people who are
    responsible for maintaining Python?  (Aahz proposes either
    python-patch or python-maint, hosted at either python.org or
    xs4all.net.)

    Does SourceForge make it possible to maintain both separate and
    combined bug lists for multiple forks?  If not, how do we mark
    bugs fixed in different forks?  (Simplest is to generate a
    new bug for each fork that it gets fixed in, referring back to the
    main bug number for details.)


  References
  
    [1] PEP 3, Hylton, http://python.sourceforge.net/peps/pep-0003.html


  Copyright
  
    This document has been placed in the public domain.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"The overexamined life sure is boring."  --Loyal Mini Onion


From tismer@tismer.com  Thu Mar 15 11:30:09 2001
From: tismer@tismer.com (Christian Tismer)
Date: Thu, 15 Mar 2001 12:30:09 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103141541.QAA03543@core.inf.ethz.ch>
Message-ID: <3AB0A7C1.B86E63F2@tismer.com>


Samuele Pedroni wrote:
> 
> Hi.
> 
> First of all I should admit I don't know what was discussed
> at IPC9 about Stackless Python.

This would have answered your question.

> My plain question (as a jython developer): is there a real intention
> to make python stackless in the short term (2.2, 2.3...)

Yes.

> AFAIK then for jython there are three options:
> 1 - Just don't care
> 2 - A major rewrite with performance issues (but AFAIK nobody has
>   the resources for doing that)
> 3 - try to implement some of the high-level features offered, through threads
>    (which could be pointless from a performance point of view:
>      e.g. microthreads through threads, not that nice).
> 
> The options are 3, just for the theoretical sake of compatibility
> (I don't see the point of porting stackless-based python code to jython),
> or 1 plus some amount of frustration <wink>. Am I missing something?
> 
> The problem will be more serious if the std lib begins to make
> heavy use of the stackless features.

Option 1 would be even fine with me. I would make all
Stackless features optional, not enforcing them for the
language.

Option 2 doesn't look reasonable. We cannot switch
microthreads without changing the VM. In CPython,
the VM is available, in Jython it is immutable.
The only way I would see is to turn Jython into
an interpreter instead of producing VM code. That
would do, but at an immense performance cost.

Option 3 is Guido's view of a compatibility layer.
Microthreads can be simulated by threads in fact.
This is slow, but compatible, making stuff just work.
Most probably this version is performing better than
option 2.

I don't believe that the library will become a problem,
if modifications are made with Jython in mind.

Personally, I'm not convinced that any of these will make
Jython users happy. The concurrency domain will in
fact be dominated by CPython, since one of the best
features of Uthreads is incredible speed and small size.
But this is similar to a couple of extensions for CPython
which are just not available for Jython.

I tried hard to find a way to make Jython Stackless.
I haven't found one yet, I'm very very sorry!
On the other hand I don't think
that Jython should play the showstopper for a technology
that people really want. Including the stackless machinery
into Python without enforcing it would be my way.
Parallel stuff can sit in an extension module.
Of course there will be a split of modules which don't
work in Jython, or which are less efficient in Jython.
But if efficiency is the demand, Jython wouldn't be
the right choice, anyway.

regards - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From guido@digicool.com  Thu Mar 15 11:55:56 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 06:55:56 -0500
Subject: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: Your message of "Thu, 15 Mar 2001 17:02:24 +1300."
 <200103150402.RAA05333@s454.cosc.canterbury.ac.nz>
References: <200103150402.RAA05333@s454.cosc.canterbury.ac.nz>
Message-ID: <200103151155.GAA07429@cj20424-a.reston1.va.home.com>

I'll say one thing and then I'll try to keep my peace about this.

I think that using rationals as the default type for
decimal-with-floating-point notation won't fly.  There are too many
issues, e.g. performance, rounding on display, usability for advanced
users, backwards compatibility.  This means that it just isn't
possible to get a consensus about moving in this direction.

Using decimal floating point won't fly either, for mostly the same
reasons, plus the implementation appears to be riddled with gotcha's
(at least rationals are relatively clean and easy to implement, given
that we already have bignums).
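(For comparison, a sketch using the Fraction type that later landed in the stdlib: rationals over bignums, much as described here. Purely illustrative, not part of the original mail:)

```python
from fractions import Fraction

# Rationals built on bignums stay exact under +, -, *, /:
x = Fraction(1, 10) + Fraction(1, 5)
print(x)         # -> 3/10
print(float(x))  # -> 0.3 (rounding happens only at conversion time)

# The display issue mentioned above: exact results can grow unwieldy.
print(Fraction(1, 3) + Fraction(1, 7))  # -> 10/21
```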

I don't think I have the time or energy to argue this much further --
someone will have to argue until they have a solution that the various
groups (educators, scientists, and programmers) can agree on.  Maybe
language levels will save the world?

That leaves three topics as potential low-hanging fruit:

- Integer unification (PEP 237).  It's mostly agreed that plain ints
  and long ints should be unified.  Simply creating a long where we
  currently overflow would be the easiest route; it has some problems
  (it's not 100% seamless) but I think it's usable and I see no real
  disadvantages.
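(The proposed behaviour, sketched with the semantics as they eventually shipped, where the promotion is automatic:)

```python
# Under PEP 237's proposal, an int operation that would overflow the
# native machine word returns a long (arbitrary precision) instead of
# raising OverflowError, so this just works:
big = 2 ** 31          # past the 32-bit signed-int limit
print(big)             # -> 2147483648
print(big * big % 7)   # -> 4; bignum arithmetic, no overflow anywhere
```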

- Number unification.  This is more controversial, but I believe less
  so than rationals or decimal f.p.  It would remove all semantic
  differences between "1" and "1.0", and therefore 1/2 would return
  0.5.  The latter is separately discussed in PEP 238, but I now
  believe this should only be done as part of a general unification.
  Given my position on decimal f.p. and rationals, this would mean an
  approximate, binary f.p. result for 1/3, and this does not seem to
  have the support of the educators (e.g. Jeff Elkner is strongly
  opposed to teaching floats at all).  But other educators (e.g. Randy
  Pausch, and the folks who did VPython) strongly recommend this based
  on user observation, so there's hope.  As a programmer, as long as
  there's *some* way to spell integer division (even div(i, j) will
  do), I don't mind.  The breakage of existing code will be great, so
  we'll be forced to introduce this gradually using a future_statement
  and warnings.
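(The division semantics under discussion, sketched as they eventually shipped:)

```python
# Unified numbers: "/" gives the mathematically expected result,
# while "//" is the separate spelling for integer division.
print(1 / 2)   # -> 0.5 (classic division would give 0)
print(7 // 2)  # -> 3
print(1 / 3)   # -> 0.3333333333333333 (approximate binary fp)
```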

- "Kinds", as proposed by Paul Dubois.  This doesn't break existing
  code or change existing semantics, it just adds more control for
  those who want it.  I think this might just work.  Will someone
  kindly help Paul get this in PEP form?

PS.  Moshe, please check in your PEPs.  They need to be on-line.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From tismer@tismer.com  Thu Mar 15 12:41:07 2001
From: tismer@tismer.com (Christian Tismer)
Date: Thu, 15 Mar 2001 13:41:07 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103150031.NAA05310@s454.cosc.canterbury.ac.nz>
Message-ID: <3AB0B863.52DFB61C@tismer.com>


Greg Ewing wrote:
> 
> Christian Tismer <tismer@tismer.com>:
> 
> > You can *create* a thread using a callback.
> 
> Okay, that's not so bad. (An earlier message seemed to
> be saying that you couldn't even do that.)
> 
> But what about GUIs such as Tkinter which have a
> main loop in C that keeps control for the life of
> the program? You'll never get back to the base-level
> interpreter, not even between callbacks, so how do
> the uthreads get scheduled?

This would not work. One simple thing I could think of is
to let the GUI live in an OS thread, and have another
thread for all the microthreads.
More difficult but maybe better: A C main loop which
doesn't run an interpreter will block otherwise. But
most probably, it will run interpreters from time to time.
These can be told to take on the scheduling role.
It does not matter on which interpreter level we are,
we just can't switch to frames of other levels. But
even leaving a frame chain, and re-entering later
with a different stack level is no problem.
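A toy illustration of the two-thread idea (all names invented, modern generator syntax; this is a sketch of cooperative microthreads run off the main thread, not Stackless itself):

```python
import threading
from collections import deque

def scheduler(tasks):
    # Round-robin over generator-based "microthreads": each next()
    # runs one cooperative step of one task.
    ready, trace = deque(tasks), []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))
            ready.append(task)      # still alive: reschedule
        except StopIteration:
            pass                    # task finished
    return trace

def worker(name, steps):
    for i in range(steps):
        yield (name, i)             # a cooperative "context switch"

# All microthreads share one OS thread, which could run alongside
# a GUI main loop owning the main thread:
out = []
t = threading.Thread(target=lambda: out.extend(
        scheduler([worker('a', 2), worker('b', 2)])))
t.start(); t.join()
print(out)  # -> [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```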

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From paulp@ActiveState.com  Thu Mar 15 13:30:52 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Thu, 15 Mar 2001 05:30:52 -0800
Subject: [Python-Dev] Before it was called Stackless....
Message-ID: <3AB0C40C.54CAA328@ActiveState.com>

http://www.python.org/workshops/1995-05/WIP.html

I found Guido's "todo list" from 1995. 

	Move the C stack out of the way 

It may be possible to implement Python-to-Python function and method
calls without pushing a C stack frame. This has several advantages -- it
could be more efficient, it may be possible to save and restore the
Python stack to enable migrating programs, and it may be possible to
implement multiple threads without OS specific support (the latter is
questionable however, since it would require a solution for all blocking
system calls). 



-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook


From tim.one@home.com  Thu Mar 15 15:31:57 2001
From: tim.one@home.com (Tim Peters)
Date: Thu, 15 Mar 2001 10:31:57 -0500
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <200103151155.GAA07429@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com>

[Guido]
> I'll say one thing and then I'll try to keep my peace about this.

If this was one thing, you're suffering major roundoff error <wink>.

> I think that using rationals as the default type for
> decimal-with-floating-point notation won't fly.  There are too many
> issues, e.g. performance, rounding on display, usability for advanced
> users, backwards compatibility.  This means that it just isn't
> possible to get a consensus about moving in this direction.

Agreed.

> Using decimal floating point won't fly either,

If you again mean "by default", also agreed.

> for mostly the same reasons, plus the implementation appears to
> be riddled with gotcha's

It's exactly as difficult or easy as implementing binary fp in software; see
yesterday's link to Cowlishaw's work for detailed pointers; and as I said
before, Cowlishaw earlier agreed (years ago) to let Python use REXX's
implementation code.

> (at least rationals are relatively clean and easy to implement, given
> that we already have bignums).

Oddly enough, I believe rationals are more code in the end (e.g., my own
Rational package is about 3000 lines of Python, but indeed is so general it
subsumes IEEE 854 (the decimal variant of IEEE 754) except for Infs and
NaNs) -- after you add rounding facilities to Rationals, they're as hairy as
decimal fp.
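As a hedged latter-day illustration of this trade-off (the `fractions` module used here reached the stdlib only years later, in Python 2.6): the rational arithmetic itself is exact, and the hairy part is indeed the rounding machinery that must be bolted on for display. The helper below is illustrative only (positive values, fixed digit count).

```python
from fractions import Fraction

# Exact rational arithmetic: no representation error accumulates.
third = Fraction(1, 3)
assert third + third + third == 1

# The rounding facilities Tim mentions have to be added by hand;
# here is one illustrative helper fixing a number of decimal digits.
def to_decimal_string(r, digits):
    scaled = round(r * 10 ** digits)  # round-half-even on a Fraction
    s = str(scaled).rjust(digits + 1, "0")
    return s[:-digits] + "." + s[-digits:]

assert to_decimal_string(third, 10) == "0.3333333333"
assert to_decimal_string(Fraction(2, 3), 4) == "0.6667"
```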

> I don't think I have the time or energy to argue this much further --
> someone will have to argue until they have a solution that the various
> groups (educators, scientists, and programmers) can agree on.  Maybe
> language levels will save the world?

A per-module directive specifying the default interpretation of fp literals
within the module is an ugly but workable possibility.

> That leaves three topics as potential low-hanging fruit:
>
> - Integer unification (PEP 237).  It's mostly agreed that plain ints
>   and long ints should be unified.  Simply creating a long where we
>   currently overflow would be the easiest route; it has some problems
>   (it's not 100% seamless) but I think it's usable and I see no real
>   disadvantages.

Good!
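For the record, the route agreed on here is roughly what PEP 237 eventually delivered; a small hedged sketch of the overflow-free behaviour:

```python
# Where a 32-bit int would once have raised OverflowError (or required
# an explicit 2L literal), unified integers simply keep growing.
n = 1
for _ in range(100):
    n *= 2
assert n == 2 ** 100
assert n == 1267650600228229401496703205376
```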

> - Number unification.  This is more controversial, but I believe less
>   so than rationals or decimal f.p.  It would remove all semantic
>   differences between "1" and "1.0", and therefore 1/2 would return
>   0.5.

The only "number unification" PEP on the table does not remove all semantic
differences:  1.0 is tagged as inexact under Moshe's PEP, but 1 is not.  So
this is some other meaning for unification.  Trying to be clear.

>   The latter is separately discussed in PEP 238, but I now believe
>   this should only be done as part of a general unification.
>   Given my position on decimal f.p. and rationals, this would mean an
>   approximate, binary f.p. result for 1/3, and this does not seem to
>   have the support of the educators (e.g. Jeff Elkner is strongly
>   opposed to teaching floats at all).

I think you'd have a very hard time finding any pre-college level teacher who
wants to teach binary fp.  Your ABC experience is consistent with that too.

>  But other educators (e.g. Randy Pausch, and the folks who did
> VPython) strongly recommend this based on user observation, so there's
> hope.

Alice is a red herring!  What they wanted was for 1/2 *not* to mean 0.  I've
read the papers and dissertations too -- there was no plea for binary fp in
those, just that division not throw away info.  The strongest you can claim
using these projects as evidence is that binary fp would be *adequate* for a
newbie graphics application.  And I'd agree with that.  But graphics is a
small corner of education, and either rationals or decimal fp would also be
adequate for newbie graphics.

>   As a programmer, as long as there's *some* way to spell integer
>   division (even div(i, j) will do), I don't mind.

Yes, I need that too.
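Both spellings did materialize; here is a hedged sketch of how PEP 238 eventually resolved this (true division via a `__future__` import in 2.2, with `//` as the explicit integer-division operator):

```python
# In Python 3 (and in 2.2+ after `from __future__ import division`),
# "/" is true division and "//" spells integer division explicitly.
assert 1 / 2 == 0.5
assert 1 // 2 == 0
assert -7 // 2 == -4  # floor division, not truncation

# The div(i, j) spelling Guido asks for is trivial on top of that:
def div(i, j):
    return i // j

assert div(7, 2) == 3
```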

>   The breakage of existing code will be great so we'll be forced to
>   introduce this gradually using a future_statement and warnings.
>
> - "Kinds", as proposed by Paul Dubois.  This doesn't break existing
>   code or change existing semantics, it just adds more control for
>   those who want it.  I think this might just work.  Will someone
>   kindly help Paul get this in PEP form?

I will.

> PS.  Moshe, please check in your PEPs.  They need to be on-line.

Absolutely.



From Samuele Pedroni <pedroni@inf.ethz.ch>  Thu Mar 15 15:39:18 2001
From: Samuele Pedroni <pedroni@inf.ethz.ch>
Date: Thu, 15 Mar 2001 16:39:18 +0100 (MET)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
Message-ID: <200103151539.QAA01573@core.inf.ethz.ch>

Hi.

[Christian Tismer]
> Samuele Pedroni wrote:
> > 
> > Hi.
> > 
> > First of all I should admit I ignore what have been discussed
> > at IPC9 about Stackless Python.
> 
> This would have answered your question.
> 
> > My plain question (as jython developer): is there a real intention
> > to make python stackless in the short term (2.2, 2.3...)
> 
> Yes.
Now I know <wink>.

> > AFAIK then for jython there are three options:
> > 1 - Just don't care
> > 2 - A major rewrite with performance issues (but AFAIK nobody has
> >   the resources for doing that)
> > 3 - try to implement some of the highlevel offered features through threads
> >    (which could be pointless from a performance point of view:
> >      e.g. microthreads trough threads, not that nice).
> > 
> > The options are: 3, just for the theoretical sake of compatibility
> > (I don't see the point of porting Stackless-based Python code to jython),
> > or 1, plus some amount of frustration <wink>. Am I missing something?
> > 
> > The problem will be more serious if the std lib begins to use
> > the stackless features heavily.
> 
> Option 1 would be even fine with me. I would make all
> Stackless features optional, not enforcing them for the
> language.
> Option 2 doesn't look reasonable. We cannot switch
> microthreads without changing the VM. In CPython,
> the VM is available, in Jython it is immutable.
> The only way I would see is to turn Jython into
> an interpreter instead of producing VM code. That
> would do, but at an immense performance cost.
To be honest, each python method invocation takes such a tour
in jython that maybe the cost would not be that much, but
we would lose the smooth java and jython integration and
the possibility of having jython applets...
so it is a no-go, and nobody has time for doing that.

> 
> Option 3 is Guido's view of a compatibility layer.
> Microthreads can be simulated by threads in fact.
> This is slow, but compatible, making stuff just work.
> Most probably this version is performing better than
> option 2.
In the long run that could find a natural solution, at least
wrt uthreads: java is having some success on the server side,
and there is some ongoing research on writing jvms with their
own scheduled lightweight threads, such that a larger number
of threads can be handled in a smoother way.

> I don't believe that the library will become a problem,
> if modifications are made with Jython in mind.
I was thinking about stuff like generators used everywhere,
but that is maybe just uninformed panicking. They are the
kind of stuff that makes programmers addicted <wink>.

> 
> Personally, I'm not convinced that any of these will make
> Jython users happy. 
If they will not be informed, they just won't care <wink>

> I tried hard to find out how to make Jython Stackless.
> There was no way yet, I'm very very sorry!
You were trying something impossible <wink>;
the smooth integration with java is the big win of jython,
and there is no way of making it stackless while preserving that.

> On the other hand I don't think
> that Jython should play the showstopper for a technology
> that people really want. 
Fine for me.

> Including the stackless machinery
> into Python without enforcing it would be my way.
> Parallel stuff can sit in an extension module.
> Of course there will be a split of modules which don't
> work in Jython, or which are less efficient in Jython.
> But if efficiency is the demand, Jython wouldn't be
> the right choice, anyway.
And python without C isn't that either.
All the dynamic optimisation technology behind the jvm makes it
outperform the pvm for things like tight loops, etc.
And jython can't exploit any of that, because python is too dynamic,
sometimes even in spurious ways.

In different ways they (java,python,... ) all are good approximations of the
Right Thing without being it, for different reasons.
(just a bit of personal frustration ;))

regards.



From guido@digicool.com  Thu Mar 15 15:42:32 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 10:42:32 -0500
Subject: [Python-Dev] Re: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: Your message of "Thu, 15 Mar 2001 10:31:57 EST."
 <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com>
Message-ID: <200103151542.KAA09191@cj20424-a.reston1.va.home.com>

> I think you'd have a very hard time finding any pre-college level teacher who
> wants to teach binary fp.  Your ABC experience is consistent with that too.

"Want to", no.  But whether they're teaching Java, C++, or Pascal,
they have no choice: if they need 0.5, they'll need binary floating
point, whether they explain it adequately or not.  Possibly they are
all staying away from the decimal point completely, but I find that
hard to believe.

> >  But other educators (e.g. Randy Pausch, and the folks who did
> > VPython) strongly recommend this based on user observation, so there's
> > hope.
> 
> Alice is a red herring!  What they wanted was for 1/2 *not* to mean 0.  I've
> read the papers and dissertations too -- there was no plea for binary fp in
> those, just that division not throw away info.

I never said otherwise.  It just boils down to binary fp as the only
realistic choice.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From mal@lemburg.com  Thu Mar 15 16:31:34 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Thu, 15 Mar 2001 17:31:34 +0100
Subject: [Python-Dev] Re: WYSIWYG decimal fractions
References: <200103150402.RAA05333@s454.cosc.canterbury.ac.nz> <200103151155.GAA07429@cj20424-a.reston1.va.home.com>
Message-ID: <3AB0EE66.37E6C633@lemburg.com>

Guido van Rossum wrote:
> 
> I'll say one thing and then I'll try to keep my peace about this.
> 
> I think that using rationals as the default type for
> decimal-with-floating-point notation won't fly.  There are too many
> issues, e.g. performance, rounding on display, usability for advanced
> users, backwards compatibility.  This means that it just isn't
> possible to get a consensus about moving in this direction.
> 
> Using decimal floating point won't fly either, for mostly the same
> reasons, plus the implementation appears to be riddled with gotcha's
> (at least rationals are relatively clean and easy to implement, given
> that we already have bignums).
> 
> I don't think I have the time or energy to argue this much further --
> someone will have to argue until they have a solution that the various
> groups (educators, scientists, and programmers) can agree on.  Maybe
> language levels will save the world?

Just out of curiosity: is there a usable decimal type implementation
somewhere on the net which we could beat on ?

I for one would be very interested in having a decimal type
around (with fixed precision and scale), since databases rely
on these a lot, and I would like to ensure that passing database
data through Python doesn't cause any data loss due to rounding
issues.

If there aren't any such implementations yet, the site that Tim
mentioned looks like a good starting point for heading in this
direction... e.g. for mx.Decimal ;-)

	http://www2.hursley.ibm.com/decimal/

I believe that now with the coercion patches in place, adding
new numeric datatypes should be fairly easy (leaving aside the
problems intrinsic to numerics themselves).
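As a hedged postscript: the Hursley specification linked above is what the stdlib `decimal` module, added in Python 2.4 well after this thread, ended up implementing, and it covers the fixed-precision database case described here.

```python
from decimal import Decimal, getcontext

# Decimal literals round-trip database fixed-point data exactly...
price = Decimal("19.99")
rate = Decimal("0.07")
assert price * rate == Decimal("1.3993")

# ...where binary floating point introduces rounding residue:
assert 0.1 + 0.2 != 0.3

# Precision (total significant digits) is a context setting,
# per the IBM/Cowlishaw model:
getcontext().prec = 6
assert Decimal(1) / Decimal(7) == Decimal("0.142857")
```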

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From martin@loewis.home.cs.tu-berlin.de  Thu Mar 15 16:30:49 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 17:30:49 +0100
Subject: [Python-Dev] Patch Manager Guidelines
Message-ID: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de>

It appears that the Patch Manager Guidelines
(http://python.sourceforge.net/sf-faq.html#a1) don't work with the new
tracker tool anymore. They claim that the status of the patch can be
Open, Accepted, Closed, etc - which is not true: the status can be
only Open, Closed, or Deleted; Accepted is a value of Resolution.

I have the following specific questions: If a patch is accepted, should
it be closed also? If so, how should the resolution change if it is
also committed?

Curious,
Martin


From fdrake@acm.org  Thu Mar 15 16:35:19 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Thu, 15 Mar 2001 11:35:19 -0500 (EST)
Subject: [Python-Dev] Patch Manager Guidelines
In-Reply-To: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de>
References: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de>
Message-ID: <15024.61255.797524.736810@localhost.localdomain>

Martin v. Loewis writes:
 > It appears that the Patch Manager Guidelines
 > (http://python.sourceforge.net/sf-faq.html#a1) don't work with the new
 > tracker tool anymore. They claim that the status of the patch can be
 > Open, Accepted, Closed, etc - which is not true: the status can be
 > only Open, Closed, or Deleted; Accepted is a value of Resolution.

  Thanks for pointing this out!

 > I have the following specific questions: If a patch is accepted, should
 > it be closed also? If so, how should the resolution change if it is
 > also committed?

  I've been setting a patch to accepted-but-open if it needs to be
checked in, and then closing it once the checkin has been made.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From guido@digicool.com  Thu Mar 15 16:44:54 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 11:44:54 -0500
Subject: [Python-Dev] Patch Manager Guidelines
In-Reply-To: Your message of "Thu, 15 Mar 2001 17:30:49 +0100."
 <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de>
References: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de>
Message-ID: <200103151644.LAA09360@cj20424-a.reston1.va.home.com>

> It appears that the Patch Manager Guidelines
> (http://python.sourceforge.net/sf-faq.html#a1) don't work with the new
> tracker tool anymore. They claim that the status of the patch can be
> Open, Accepted, Closed, etc - which is not true: the status can be
> only Open, Closed, or Deleted; Accepted is a value of Resolution.
> 
> I have the following specific questions: If a patch is accepted, should
> it be closed also? If so, how should the resolution change if it is
> also committed?

A patch should only be closed after it has been committed; otherwise
it's too easy to lose track of it.  So I guess the proper sequence is

1. accept; Resolution set to Accepted

2. commit; Status set to Closed

I hope the owner of the sf-faq document can fix it.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From martin@loewis.home.cs.tu-berlin.de  Thu Mar 15 17:22:41 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 18:22:41 +0100
Subject: [Python-Dev] Preparing 2.0.1
Message-ID: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>

I've committed a few changes to the 2.0 release branch, and I'd
propose to follow the following procedure when doing so:

- In the checkin message, indicate which file version from the
  mainline is being copied into the release branch.

- In Misc/NEWS, indicate what bugs have been fixed by installing these
  patches. If it was a patch in response to a SF bug report, listing
  the SF bug id should be sufficient; I've put some instructions into
  Misc/NEWS on how to retrieve the bug report for a bug id.

I'd also propose that 2.0.1, at a minimum, should contain the patches
listed on the 2.0 MoinMoin

http://www.python.org/cgi-bin/moinmoin

I've done so only for the _tkinter patch, which was both listed as
critical, and which closed 2 SF bug reports. I've verified that the
sre_parse patch also closes a number of SF bug reports, but have not
copied it to the release branch.

Please let me know what you think.

Martin


From guido@digicool.com  Thu Mar 15 17:39:32 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 12:39:32 -0500
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: Your message of "Thu, 15 Mar 2001 18:22:41 +0100."
 <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>
Message-ID: <200103151739.MAA09627@cj20424-a.reston1.va.home.com>

Excellent, Martin!

There are way more patches that we *could* add than the MoinMoin
Wiki lists, though.

I hope that somebody has the time to wade through the 2.1 code to look
for gems.  These should all be *pure* bugfixes!

I haven't seen Aahz' PEP in detail yet; I hope there isn't a
requirement that 2.0.1 come out before 2.1?  The licensing stuff may
be holding 2.0.1 up. :-(

--Guido van Rossum (home page: http://www.python.org/~guido/)


From fredrik@effbot.org  Thu Mar 15 18:15:17 2001
From: fredrik@effbot.org (Fredrik Lundh)
Date: Thu, 15 Mar 2001 19:15:17 +0100
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>
Message-ID: <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid>

Martin wrote:
> I've verified that the sre_parse patch also closes a number of SF
> bug reports, but have not copied it to the release branch.

it's probably best to upgrade to the current SRE code base.

also, it would make sense to bump makeunicodedata.py to 1.8,
and regenerate the unicode database (this adds 38,642 missing
unicode characters).

I'll look into this this weekend, if I find the time.

Cheers /F



From mwh21@cam.ac.uk  Thu Mar 15 18:28:48 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: Thu, 15 Mar 2001 18:28:48 +0000 (GMT)
Subject: [Python-Dev] python-dev summary, 2001-03-01 - 2001-03-15
Message-ID: <Pine.LNX.4.10.10103151820200.24973-100000@localhost.localdomain>

 This is a summary of traffic on the python-dev mailing list between
 Mar 1 and Mar 14 (inclusive) 2001.  It is intended to inform the
 wider Python community of ongoing developments.  To comment, just
 post to python-list@python.org or comp.lang.python in the usual
 way. Give your posting a meaningful subject line, and if it's about a
 PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep
 iteration) All python-dev members are interested in seeing ideas
 discussed by the community, so don't hesitate to take a stance on a
 PEP if you have an opinion.

 This is the third python-dev summary written by Michael Hudson.
 Previous summaries were written by Andrew Kuchling and can be found
 at:

   <http://www.amk.ca/python/dev/>

 New summaries will appear at:

  <http://starship.python.net/crew/mwh/summaries/>

 and will continue to be archived at Andrew's site.

   Posting distribution (with apologies to mbm)

   Number of articles in summary: 264

    50 |                                             ]|[        
       |                                             ]|[        
       |                                             ]|[        
       |                                             ]|[        
    40 | ]|[                                         ]|[        
       | ]|[                                         ]|[        
       | ]|[                                         ]|[        
       | ]|[                                         ]|[ ]|[    
    30 | ]|[                                         ]|[ ]|[    
       | ]|[                                         ]|[ ]|[    
       | ]|[                                         ]|[ ]|[ ]|[
       | ]|[                                         ]|[ ]|[ ]|[
    20 | ]|[                                         ]|[ ]|[ ]|[
       | ]|[ ]|[                                     ]|[ ]|[ ]|[
       | ]|[ ]|[                                     ]|[ ]|[ ]|[
       | ]|[ ]|[                                 ]|[ ]|[ ]|[ ]|[
    10 | ]|[ ]|[ ]|[                             ]|[ ]|[ ]|[ ]|[
       | ]|[ ]|[ ]|[     ]|[                     ]|[ ]|[ ]|[ ]|[
       | ]|[ ]|[ ]|[     ]|[ ]|[                 ]|[ ]|[ ]|[ ]|[
       | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[ ]|[ ]|[ ]|[
     0 +-050-022-012-004-009-006-003-002-003-005-017-059-041-031
        Thu 01| Sat 03| Mon 05| Wed 07| Fri 09| Sun 11| Tue 13|
            Fri 02  Sun 04  Tue 06  Thu 08  Sat 10  Mon 12  Wed 14

 A quiet fortnight on python-dev; the conference a week ago is
 responsible for some of that, but also discussion has been springing
 up on other mailing lists (including the types-sig, doc-sig,
 python-iter and stackless lists, and those are just the ones your
 author is subscribed to).


   * Bug Fix Releases *

 Aahz posted a proposal for a 2.0.1 release, fixing the bugs that have
 been found in 2.0 but not adding the new features.

  <http://mail.python.org/pipermail/python-dev/2001-March/013389.html>

 Guido's response was, essentially, "Good idea, but I don't have the
 time to put into it", and that the wider community would have to put
 in some of the donkey work if this is going to happen.  Signs so far
 are encouraging.


    * Numerics *

 Moshe Zadka posted three new PEP-drafts:

  <http://mail.python.org/pipermail/python-dev/2001-March/013435.html>

 which on discussion became four new PEPs, which are not yet online
 (hint, hint).

 The four titles are

    Unifying Long Integers and Integers
    Non-integer Division
    Adding a Rational Type to Python
    Adding a Rational Literal to Python

 and they will appear fairly soon at

  <http://python.sourceforge.net/peps/pep-0237.html>
  <http://python.sourceforge.net/peps/pep-0238.html>
  <http://python.sourceforge.net/peps/pep-0239.html>
  <http://python.sourceforge.net/peps/pep-0240.html>

 respectively.

 Although pedantically falling slightly out of the remit of this
 summary, I should mention Guido's partial BDFL pronouncement:

  <http://mail.python.org/pipermail/python-dev/2001-March/013587.html>

 A new mailing list had been setup to discuss these issues:

  <http://lists.sourceforge.net/lists/listinfo/python-numerics>


    * Revive the types-sig? *

 Paul Prescod has single-handedly kicked the types-sig into life
 again.

  <http://mail.python.org/sigs/types-sig/>

 The discussion this time seems to be centered on interfaces and how to
 use them effectively.  You never know, we might get somewhere this
 time!

    * stackless *

 Jeremy Hylton posted some comments on Gordon McMillan's new draft of
 the stackless PEP (PEP 219) and the stackless dev day discussion at
 Spam 9.

  <http://mail.python.org/pipermail/python-dev/2001-March/013494.html>

 The discussion has mostly focussed on technical issues; there has
 been no comment on if or when the core Python will become stackless.


    * miscellanea *

 There was some discussion on nested scopes, but mainly on
 implementation issues.  Thomas Wouters promised <wink> to sort out
 the "continue in finally: clause" wart.

Cheers,
M.



From esr@thyrsus.com  Thu Mar 15 18:35:30 2001
From: esr@thyrsus.com (Eric)
Date: Thu, 15 Mar 2001 10:35:30 -0800
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: <200103142305.SAA05872@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Wed, Mar 14, 2001 at 06:05:50PM -0500
References: <E14cx6s-0002zN-00@golux.thyrsus.com> <200103142305.SAA05872@cj20424-a.reston1.va.home.com>
Message-ID: <20010315103530.C1530@thyrsus.com>

Guido van Rossum <guido@digicool.com>:
> > I have fixed some obvious errors (use of the deprecated 'cmp' module;
> > use of regex) but I have encountered run-time errors that are beyond
> > my competence to fix.  From a cursory inspection of the code it looks
> > to me like the freeze tools need adaptation to the new
> > distutils-centric build process.
> 
> The last maintainers were me and Mark Hammond, but neither of us has
> time to look into this right now.  (At least I know I don't.)
> 
> What kind of errors do you encounter?

After cleaning up the bad imports, use of regex, etc, first thing I see
is an assertion failure in the module finder.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

"They that can give up essential liberty to obtain a little temporary 
safety deserve neither liberty nor safety."
	-- Benjamin Franklin, Historical Review of Pennsylvania, 1759.


From guido@digicool.com  Thu Mar 15 18:49:21 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 13:49:21 -0500
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: Your message of "Thu, 15 Mar 2001 10:35:30 PST."
 <20010315103530.C1530@thyrsus.com>
References: <E14cx6s-0002zN-00@golux.thyrsus.com> <200103142305.SAA05872@cj20424-a.reston1.va.home.com>
 <20010315103530.C1530@thyrsus.com>
Message-ID: <200103151849.NAA09878@cj20424-a.reston1.va.home.com>

> > What kind of errors do you encounter?
> 
> After cleaning up the bad imports, use of regex, etc, first thing I see
> is an assertion failure in the module finder.

Are you sure you are using the latest CVS version of freeze?  I didn't
have to clean up any bad imports -- it just works for me.  But maybe
I'm not using all the features?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From martin@loewis.home.cs.tu-berlin.de  Thu Mar 15 18:49:37 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 19:49:37 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid> (fredrik@effbot.org)
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid>
Message-ID: <200103151849.f2FInbd02760@mira.informatik.hu-berlin.de>

> it's probably best to upgrade to the current SRE code base.

I'd be concerned about the "pure bugfix" nature of the current SRE
code base. It is probably minor things, like the addition of

+    PyDict_SetItemString(
+        d, "MAGIC", (PyObject*) PyInt_FromLong(SRE_MAGIC)
+        );

+# public symbols
+__all__ = [ "match", "search", "sub", "subn", "split", "findall",
+    "compile", "purge", "template", "escape", "I", "L", "M", "S", "X",
+    "U", "IGNORECASE", "LOCALE", "MULTILINE", "DOTALL", "VERBOSE",
+    "UNICODE", "error" ]
+

+DEBUG = sre_compile.SRE_FLAG_DEBUG # dump pattern after compilation

-    def getgroup(self, name=None):
+    def opengroup(self, name=None):

The famous last words here are "those changes can do no
harm". However, somebody might rely on Pattern objects having a
getgroup method (even though it is not documented). Some code (relying
on undocumented features) may break with 2.1, which is acceptable; it
is not acceptable for a bugfix release.

For the bugfix release, I'd feel much better if a clear set of pure
bug fixes were identified, along with a list of bugs they fix. So "no
new feature" would rule out a new constant named MAGIC (*).

If a "pure bugfix" happens to break something as well, we can at least
find out what it fixed in return, and then probably find that the fix
justified the breakage.

Regards,
Martin

(*) There are also new constants AT_BEGINNING_STRING, but it appears
that it was introduced as response to a bug report.


From esr@thyrsus.com  Thu Mar 15 18:54:17 2001
From: esr@thyrsus.com (Eric)
Date: Thu, 15 Mar 2001 10:54:17 -0800
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: <200103151849.NAA09878@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 15, 2001 at 01:49:21PM -0500
References: <E14cx6s-0002zN-00@golux.thyrsus.com> <200103142305.SAA05872@cj20424-a.reston1.va.home.com> <20010315103530.C1530@thyrsus.com> <200103151849.NAA09878@cj20424-a.reston1.va.home.com>
Message-ID: <20010315105417.J1530@thyrsus.com>

Guido van Rossum <guido@digicool.com>:
> Are you sure you are using the latest CVS version of freeze?  I didn't
> have to clean up any bad imports -- it just works for me.  But maybe
> I'm not using all the features?

I'll cvs update and check.  Thanks.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Still, if you will not fight for the right when you can easily
win without bloodshed, if you will not fight when your victory
will be sure and not so costly, you may come to the moment when
you will have to fight with all the odds against you and only a
precarious chance for survival. There may be a worse case.  You
may have to fight when there is no chance of victory, because it
is better to perish than to live as slaves.
	--Winston Churchill


From skip@pobox.com (Skip Montanaro)  Thu Mar 15 19:14:59 2001
From: skip@pobox.com (Skip Montanaro)
Date: Thu, 15 Mar 2001 13:14:59 -0600 (CST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103150614.BAA04221@panix6.panix.com>
References: <200103150614.BAA04221@panix6.panix.com>
Message-ID: <15025.5299.651586.244121@beluga.mojam.com>

    aahz> Starting with Python 2.0, all feature releases are required to
    aahz> have the form X.Y; patch releases will always be of the form
    aahz> X.Y.Z.  To clarify the distinction between a bug fix release and a
    aahz> patch release, all non-bug fix patch releases will have the suffix
    aahz> "p" added.  For example, "2.1" is a feature release; "2.1.1" is a
    aahz> bug fix release; and "2.1.2p" is a patch release that contains
    aahz> minor feature enhancements.

I don't understand the need for (or fundamental difference between) bug fix
and patch releases.  If 2.1 is the feature release and 2.1.1 is a bug fix
release, is 2.1.2p a branch off of 2.1.2 or 2.1.1?

    aahz> The Patch Czar is the counterpart to the BDFL for patch releases.
    aahz> However, the BDFL and designated appointees retain veto power over
    aahz> individual patches and the decision of whether to label a patch
    aahz> release as a bug fix release.

I propose that instead of (or in addition to) the Patch Czar you have a
Release Shepherd (RS) for each feature release, presumably someone motivated
to help maintain that particular release.  This person (almost certainly
someone outside PythonLabs) would be responsible for the bug fix releases
associated with a single feature release.  Your use of 2.1's sre as a "small
feature change" for 2.0 and 1.5.2 is an example where having an RS for each
feature release would be worthwhile.  Applying sre 2.1 to the 2.0 source
would probably be reasonably easy.  Adding it to 1.5.2 would be much more
difficult (no Unicode), and so would quite possibly be accepted by the 2.0
RS and rejected by the 1.5.2 RS.

As time passes, interest in further bug fix releases for specific feature
releases will probably wane.  When interest drops far enough the RS could
simply declare that branch closed and move on to other things.

I envision the Patch Czar voting a general yea or nay on a specific patch,
then passing it along to all the current RSs, who would make the final
decision about whether that patch is appropriate for the release they are
managing.

I suggest dumping the patch release concept and just going with bug fix
releases.  The system will be complex enough without them.  If it proves
desirable later, you can always add them.

Skip


From fredrik@effbot.org  Thu Mar 15 19:25:45 2001
From: fredrik@effbot.org (Fredrik Lundh)
Date: Thu, 15 Mar 2001 20:25:45 +0100
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid> <200103151849.f2FInbd02760@mira.informatik.hu-berlin.de>
Message-ID: <03d101c0ad85$bc812610$e46940d5@hagrid>

martin wrote:

> I'd be concerned about the "pure bugfix" nature of the current SRE
> code base. 

well, unlike you, I wrote the code.

> -    def getgroup(self, name=None):
> +    def opengroup(self, name=None):
> 
> The famous last words here are "those changes can do no
> harm". However, somebody might rely on Pattern objects having a
> getgroup method (even though it is not documented).

it may sound weird, but I'd rather support people who rely on regular
expressions working as documented...

> For the bugfix release, I'd feel much better if a clear set of pure
> bug fixes were identified, along with a list of bugs they fix. So "no
> new feature" would rule out "no new constant named MAGIC" (*).

what makes you so sure that MAGIC wasn't introduced to deal with
a bug report?  (hint: it was)

> If a "pure bugfix" happens to break something as well, we can atleast
> find out what it fixed in return, and then probably find that the fix
> justified the breakage.

more work, and far fewer bugs fixed.  let's hope you have lots of
volunteers lined up...

Cheers /F



From fredrik@pythonware.com  Thu Mar 15 19:43:11 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Thu, 15 Mar 2001 20:43:11 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
References: <200103150614.BAA04221@panix6.panix.com> <15025.5299.651586.244121@beluga.mojam.com>
Message-ID: <000f01c0ad88$2cd4b970$e46940d5@hagrid>

skip wrote:
> I suggest dumping the patch release concept and just going with bug fix
> releases.  The system will be complex enough without them.  If it proves
> desirable later, you can always add them.

agreed.

> Applying sre 2.1 to the 2.0 source would probably be reasonably easy.
> Adding it to 1.5.2 would be much more difficult (no Unicode), and so
> would quite possibly be accepted by the 2.0 RS and rejected by the
> 1.5.2 RS.

footnote: SRE builds and runs just fine under 1.5.2:

    http://www.pythonware.com/products/sre

Cheers /F



From thomas.heller@ion-tof.com  Thu Mar 15 20:00:19 2001
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Thu, 15 Mar 2001 21:00:19 +0100
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>
Message-ID: <0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>

[Martin v. Loewis]
> I'd also propose that 2.0.1, at a minimum, should contain the patches
> listed on the 2.0 MoinMoin
> 
> http://www.python.org/cgi-bin/moinmoin
> 
So how should requests for patches be submitted?
Should I enter them into the wiki, post to python-dev,
email to aahz?

I would kindly request two of the fixed bugs I reported to
go into 2.0.1:

Bug id 231064, sys.path not set correctly in embedded python interpreter
Bug id 221965, 10 in xrange(10) returns 1
(I would consider the last one as critical)
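
For the record, the half-open semantics that the second bug violated (the
end point must never test as a member) can be sketched like this; a
present-day illustrative sketch, not the 2.0-era C implementation:

```python
def xrange_contains(start, stop, step, value):
    """Correct half-open membership test for an arithmetic range.

    Mirrors what ``value in xrange(start, stop, step)`` should return.
    """
    if step > 0:
        in_bounds = start <= value < stop
    else:
        in_bounds = stop < value <= start
    # The value must also land exactly on the arithmetic progression.
    return in_bounds and (value - start) % step == 0

# The reported bug: 10 must NOT be in xrange(10).
assert not xrange_contains(0, 10, 1, 10)
assert xrange_contains(0, 10, 1, 9)
```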

Thomas



From aahz@pobox.com  Thu Mar 15 20:11:31 2001
From: aahz@pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 12:11:31 -0800 (PST)
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <no.id> from "Thomas Heller" at Mar 15, 2001 09:00:19 PM
Message-ID: <200103152011.PAA28835@panix3.panix.com>

> So how should requests for patches be submitted?
> Should I enter them into the wiki, post to python-dev,
> email to aahz?

As you'll note in PEP 6, this is one of the issues that needs some
resolving.  The correct solution long-term will likely involve some
combination of a new mailing list (so python-dev doesn't get overwhelmed)
and SourceForge bug management.  In the meantime, I'm keeping a record.

Part of the problem in simply moving forward is that I am neither on
python-dev myself nor do I have CVS commit privileges; I'm also not much
of a C programmer.  Thomas Wouters and Jeremy Hylton have made statements
that could be interpreted as saying that they're willing to be the Patch
Czar, but while I assume that either would be passed by acclamation, I'm
certainly not going to shove it on them.  If either accepts, I'll be glad
to take on whatever administrative tasks they ask for.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"The overexamined life sure is boring."  --Loyal Mini Onion


From martin@loewis.home.cs.tu-berlin.de  Thu Mar 15 20:39:14 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 21:39:14 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <03d101c0ad85$bc812610$e46940d5@hagrid> (fredrik@effbot.org)
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid> <200103151849.f2FInbd02760@mira.informatik.hu-berlin.de> <03d101c0ad85$bc812610$e46940d5@hagrid>
Message-ID: <200103152039.f2FKdEQ22768@mira.informatik.hu-berlin.de>

> > I'd be concerned about the "pure bugfix" nature of the current SRE
> > code base. 
> 
> well, unlike you, I wrote the code.

I am aware of that. My apologies if I suggested otherwise.

> it may sound weird, but I'd rather support people who rely on regular
> expressions working as documented...

That is not weird at all.

> > For the bugfix release, I'd feel much better if a clear set of pure
> > bug fixes were identified, along with a list of bugs they fix. So "no
> > new feature" would rule out "no new constant named MAGIC" (*).
> 
> what makes you so sure that MAGIC wasn't introduced to deal with
> a bug report?  (hint: it was)

I am not sure. What was the bug report that caused its introduction?

> > If a "pure bugfix" happens to break something as well, we can at least
> > find out what it fixed in return, and then probably find that the fix
> > justified the breakage.
> 
> more work, and far fewer bugs fixed.  let's hope you have lots of
> volunteers lined up...

Nobody has asked *you* to do that work. If you think your time is
better spent in fixing existing bugs instead of back-porting the fixes
to 2.0 - there is nothing wrong with that at all. It all depends on
what the volunteers are willing to do.

Regards,
Martin


From guido@digicool.com  Thu Mar 15 21:14:16 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 16:14:16 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Thu, 15 Mar 2001 20:43:11 +0100."
 <000f01c0ad88$2cd4b970$e46940d5@hagrid>
References: <200103150614.BAA04221@panix6.panix.com> <15025.5299.651586.244121@beluga.mojam.com>
 <000f01c0ad88$2cd4b970$e46940d5@hagrid>
Message-ID: <200103152114.QAA10305@cj20424-a.reston1.va.home.com>

> skip wrote:
> > I suggest dumping the patch release concept and just going with bug fix
> > releases.  The system will be complex enough without them.  If it proves
> > desirable later, you can always add them.
> 
> agreed.

+1

> > Applying sre 2.1 to the 2.0 source would probably be reasonably easy.
> > Adding it to 1.5.2 would be much more difficult (no Unicode), and so
> > would quite possibly be accepted by the 2.0 RS and rejected by the
> > 1.5.2 RS.
> 
> footnote: SRE builds and runs just fine under 1.5.2:
> 
>     http://www.pythonware.com/products/sre

In the specific case of SRE, I'm +1 on keeping the code base in 2.0.1
completely synchronized with 2.1.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From martin@loewis.home.cs.tu-berlin.de  Thu Mar 15 21:32:47 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 22:32:47 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>
 (thomas.heller@ion-tof.com)
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>
Message-ID: <200103152132.f2FLWlE29312@mira.informatik.hu-berlin.de>

> So how should requests for patches be submitted?
> Should I enter them into the wiki, post to python-dev,
> email to aahz?

Personally, I think 2.0.1 should be primarily driven by user requests;
I think this is also the spirit of the PEP. I'm not even sure that
going over the entire code base systematically and copying all bug
fixes is a good idea.

In that sense, having somebody collect these requests is probably the
right approach. In this specific case, I'll take care of them, unless
somebody else proposes a different procedure. For the record, you are
requesting inclusion of

rev 1.23 of PC/getpathp.c
rev 2.21, 2.22 of Objects/rangeobject.c
rev 1.20 of Lib/test/test_b2.py

Interestingly enough, 2.22 of rangeobject.c also adds three attributes
to the xrange object: start, stop, and step. That is clearly a new
feature, so should it be moved into 2.0.1? Otherwise, the fix must be
back-ported to 2.0.

I think we need a policy decision here, which could probably take
one of three outcomes:
1. everybody with CVS commit access can decide to move patches from
   the mainline to the branch. That would mean I could move these
   patches, and Fredrik Lundh could install the sre code base as-is.

2. the author of the original patch can make that decision. That would
   mean that Fredrik Lundh can still install his code as-is, but I'd
   have to ask Fred's permission.

3. the bug release coordinator can make that decision. That means that
   Aahz must decide.

If it is 1 or 2, some guideline is probably needed as to what exactly
is suitable for inclusion into 2.0.1. Guido has requested "*pure*
bugfixes", which, to me, says

a) sre must be carefully reviewed change for change
b) the three attributes on xrange objects must not appear in 2.0.1

In any case, I'm in favour of a much more careful operation for a
bugfix release. That probably means not all bugs that have been fixed
already will be fixed in 2.0.1; I would not expect otherwise.

Regards,
Martin


From aahz@pobox.com  Thu Mar 15 22:21:12 2001
From: aahz@pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 14:21:12 -0800 (PST)
Subject: [Python-Dev] Re: Preparing 2.0.1
In-Reply-To: <no.id> from "Martin v. Loewis" at Mar 15, 2001 06:22:41 PM
Message-ID: <200103152221.RAA16060@panix3.panix.com>

> - In the checkin message, indicate which file version from the
>   mainline is being copied into the release branch.

Sounds good.

> - In Misc/NEWS, indicate what bugs have been fixed by installing these
>   patches. If it was a patch in response to a SF bug report, listing
>   the SF bug id should be sufficient; I've put some instructions into
>   Misc/NEWS on how to retrieve the bug report for a bug id.

Good, too.

> I've done so only for the _tkinter patch, which was both listed as
> critical, and which closed 2 SF bug reports. I've verified that the
> sre_parse patch also closes a number of SF bug reports, but have not
> copied it to the release branch.

I'm a little concerned that the 2.0 branch is being updated without a
2.0.1 target created, but it's quite possible my understanding of how
this should work is faulty.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J


From aahz@pobox.com  Thu Mar 15 22:34:26 2001
From: aahz@pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 14:34:26 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <no.id> from "Skip Montanaro" at Mar 15, 2001 01:14:59 PM
Message-ID: <200103152234.RAA16951@panix3.panix.com>

>     aahz> Starting with Python 2.0, all feature releases are required to
>     aahz> have the form X.Y; patch releases will always be of the form
>     aahz> X.Y.Z.  To clarify the distinction between a bug fix release and a
>     aahz> patch release, all non-bug fix patch releases will have the suffix
>     aahz> "p" added.  For example, "2.1" is a feature release; "2.1.1" is a
>     aahz> bug fix release; and "2.1.2p" is a patch release that contains
>     aahz> minor feature enhancements.
> 
> I don't understand the need for (or fundamental difference between) bug fix
> and patch releases.  If 2.1 is the feature release and 2.1.1 is a bug fix
> release, is 2.1.2p a branch off of 2.1.2 or 2.1.1?

That's one of the issues that needs to be resolved if we permit both
patch releases and bug fix releases.  My preference would be that 2.1.2p
is a branch from 2.1.1.

>     aahz> The Patch Czar is the counterpart to the BDFL for patch releases.
>     aahz> However, the BDFL and designated appointees retain veto power over
>     aahz> individual patches and the decision of whether to label a patch
>     aahz> release as a bug fix release.
> 
> I propose that instead of (or in addition to) the Patch Czar you have a
> Release Shepherd (RS) for each feature release, presumably someone motivated
> to help maintain that particular release.  This person (almost certainly
> someone outside PythonLabs) would be responsible for the bug fix releases
> associated with a single feature release.  Your use of 2.1's sre as a "small
> feature change" for 2.0 and 1.5.2 is an example where having an RS for each
> feature release would be worthwhile.  Applying sre 2.1 to the 2.0 source
> would probably be reasonably easy.  Adding it to 1.5.2 would be much more
> difficult (no Unicode), and so would quite possibly be accepted by the 2.0
> RS and rejected by the 1.5.2 RS.

That may be a good idea.  Comments from others?  (Note that in the case
of sre, I was aware that Fredrik had already backported to both 2.0 and
1.5.2.)

> I suggest dumping the patch release concept and just going with bug fix
> releases.  The system will be complex enough without them.  If it proves
> desirable later, you can always add them.

Well, that was my original proposal before turning this into an official
PEP.  The stumbling block was the example of the case-sensitive import
patch (that permits Python's use on BeOS and MacOS X) for 2.1.  Both
Guido and Tim stated their belief that this was a "feature" and not a
"bug fix" (and I don't really disagree with them).  This leaves the
following options (assuming that backporting the import fix doesn't break
one of the Prohibitions):

* Change the minds of Guido/Tim to make the import issue a bugfix.

* Don't backport case-sensitive imports to 2.0.

* Permit minor feature additions/changes.

If we choose that last option, I believe a distinction should be drawn
between releases that contain only bugfixes and releases that contain a
bit more.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J


From thomas@xs4all.net  Thu Mar 15 22:37:37 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 15 Mar 2001 23:37:37 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103150614.BAA04221@panix6.panix.com>; from aahz@panix.com on Thu, Mar 15, 2001 at 01:14:54AM -0500
References: <200103150614.BAA04221@panix6.panix.com>
Message-ID: <20010315233737.B29286@xs4all.nl>

On Thu, Mar 15, 2001 at 01:14:54AM -0500, aahz@panix.com wrote:
> [posted to c.l.py.announce and c.l.py; followups to c.l.py; cc'd to
> python-dev]

>     Patch releases are required to adhere to the following
>     restrictions:

>     1. There must be zero syntax changes.  All .pyc and .pyo files
>        must work (no regeneration needed) with all patch releases
>        forked off from a feature release.

Hmm... Would making 'continue' work inside 'try' count as a bugfix or as a
feature ? It's technically not a syntax change, but practically it is.
(Invalid syntax suddenly becomes valid.) 
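
As an aside, the .pyc restriction quoted above is policed by a magic
number at the front of every compiled file; a patch release must leave it
alone or every .pyc file would need regenerating. A sketch of the check
in present-day Python (using importlib, not the 2.0-era imp module):

```python
import importlib.util
import os
import py_compile
import tempfile

# Compile a trivial module and inspect the header of the resulting .pyc.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "mod.py")
    with open(src, "w") as f:
        f.write("x = 1\n")
    pyc = py_compile.compile(src, cfile=os.path.join(d, "mod.pyc"))
    with open(pyc, "rb") as f:
        magic = f.read(4)

# The first four bytes are the interpreter's bytecode magic number;
# a mismatch forces recompilation of the source.
assert magic == importlib.util.MAGIC_NUMBER
```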

>   Bug Fix Releases

>     Bug fix releases are a subset of all patch releases; it is
>     prohibited to add any features to the core in a bug fix release.
>     A patch release that is not a bug fix release may contain minor
>     feature enhancements, subject to the Prohibitions section.

I'm not for this 'bugfix release', 'patch release' difference. The
numbering/naming convention is too confusing, not clear enough, and I don't
see the added benifit of adding limited features. If people want features,
they should go and get a feature release. The most important bit in patch
('bugfix') releases is not to add more bugs, and rewriting parts of code to
fix a bug is something that is quite likely to insert more bugs. Sure, as
the patch coder, you are probably certain there are no bugs -- but so was
whoever added the bug in the first place :)

>     The Patch Czar decides when there are a sufficient number of
>     patches to warrant a release.  The release gets packaged up,
>     including a Windows installer, and made public as a beta release.
>     If any new bugs are found, they must be fixed and a new beta
>     release publicized.  Once a beta cycle completes with no new bugs
>     found, the package is sent to PythonLabs for certification and
>     publication on python.org.

>     Each beta cycle must last a minimum of one month.

This process probably needs a firm smack with reality, but that would have
to wait until it meets some, first :) Deciding when to do a bugfix release
is very tricky: some bugs warrant a quick release, but waiting to assemble
more is generally a good idea. The whole beta cycle and windows
installer/RPM/etc process is also a bottleneck. Will Tim do the Windows
Installer (or whoever does it for the regular releases) ? If he's building
the installer anyway, why can't he 'bless' the release right away ?

I'm also not sure if a beta cycle in a bugfix release is really necessary,
especially a month long one. Given that we have a feature release planned
each 6 months, and a feature release has generally 2 alphas and 2 betas,
plus sometimes a release candidate, plus the release itself, and a bugfix
release would have one or two betas too, and say that we do two betas in
those six months, that would make 10+ 'releases' of various form in those 6
months. Ain't no-one[*] going to check them out for a decent spin, they'll
just wait for the final version.

>     Should the first patch release following any feature release be
>     required to be a bug fix release?  (Aahz proposes "yes".)
>     Is it allowed to do multiple forks (e.g. is it permitted to have
>     both 2.0.2 and 2.0.2p)?  (Aahz proposes "no".)
>     Does it makes sense for a bug fix release to follow a patch
>     release?  (E.g., 2.0.1, 2.0.2p, 2.0.3.)

More reasons not to have separate featurebugfixreleasethingies and
bugfix-releases :)

>     What is the equivalent of python-dev for people who are
>     responsible for maintaining Python?  (Aahz proposes either
>     python-patch or python-maint, hosted at either python.org or
>     xs4all.net.)

It would probably never be hosted at xs4all.net. We use the .net address
for network related stuff, and as a nice Personality Enhancer (read: IRC
dick extender) for employees. We'd be happy to host stuff, but I would
actually prefer to have it under a python.org or some other python-related
domainname. That forestalls python questions going to admin@xs4all.net :) A
small logo somewhere on the main page would be nice, but stuff like that
should be discussed if it's ever an option, not just because you like the
name 'XS4ALL' :-)

>     Does SourceForge make it possible to maintain both separate and
>     combined bug lists for multiple forks?  If not, how do we mark
>     bugs fixed in different forks?  (Simplest is to simply generate a
>     new bug for each fork that it gets fixed in, referring back to the
>     main bug number for details.)

We could make it a separate SF project, just for the sake of keeping
bugreports/fixes in the maintenance branch and the head branch apart. The
main Python project already has an unwieldy number of open bugreports and
patches.

I'm also for starting the maintenance branch right after the real release,
and start adding bugfixes to it right away, as soon as they show up. Keeping
up to date on bufixes to the head branch is then as 'simple' as watching
python-checkins. (Up until the fact a whole subsystem gets rewritten, that
is :) People should still be able to submit bugfixes for the maintenance
branch specifically.

And I'm still willing to be the patch monkey, though I don't think I'm the
only or the best candidate. I'll happily contribute regardless of who gets
the blame :)

[*] There, that better, Moshe ?
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Thu Mar 15 22:44:21 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 15 Mar 2001 23:44:21 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103152234.RAA16951@panix3.panix.com>; from aahz@pobox.com on Thu, Mar 15, 2001 at 02:34:26PM -0800
References: <no.id> <200103152234.RAA16951@panix3.panix.com>
Message-ID: <20010315234421.C29286@xs4all.nl>

On Thu, Mar 15, 2001 at 02:34:26PM -0800, Aahz Maruch wrote:

[ How to get case-insensitive import fixed in 2.0.x ]

> * Permit minor feature additions/changes.

> If we choose that last option, I believe a distinction should be drawn
> between releases that contain only bugfixes and releases that contain a
> bit more.

We could make the distinction in the release notes. It could be a
'PURE BUGFIX RELEASE' or a 'FEATURE FIX RELEASE'. Bugfix releases just fix
bugs, that is, wrong behaviour. Feature fix releases fix misfeatures, like
the case insensitive import issues. The difference between the two should be
explained in the paragraph following the header, for *each* release. For
example,

This is a 		PURE BUGFIX RELEASE.
This means that it only fixes behaviour that was previously giving an error,
or providing obviously wrong results. Only code relying on the outcome of
obviously incorrect code can be affected.

and

This is a 		FEATURE FIX RELEASE
This means that the (unexpected) behaviour of one or more features was
changed. This is a low-impact change that is unlikely to affect anyone, but
it is theoretically possible. See below for a list of possible effects: 
[ list of mis-feature-fixes and their result. ]

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From greg@cosc.canterbury.ac.nz  Thu Mar 15 22:45:50 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 11:45:50 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AB0B863.52DFB61C@tismer.com>
Message-ID: <200103152245.LAA05494@s454.cosc.canterbury.ac.nz>

> But most probably, it will run interpreters from time to time.
> These can be told to take the scheduling role on.

You'll have to expand on that. My understanding is that
all the uthreads would have to run in a single C-level
interpreter invocation which can never be allowed to
return. I don't see how different interpreters can be
made to "take on" this role. If that were possible,
there wouldn't be any problem in the first place.

> It does not matter on which interpreter level we are,
> we just can't switch to frames of other levels. But
> even leaving a frame chain, and re-entering later
> with a different stack level is no problem.

You'll have to expand on that, too. Those two sentences
sound contradictory to me.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From thomas@xs4all.net  Thu Mar 15 22:54:08 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 15 Mar 2001 23:54:08 +0100
Subject: [Python-Dev] Re: Preparing 2.0.1
In-Reply-To: <200103152221.RAA16060@panix3.panix.com>; from aahz@pobox.com on Thu, Mar 15, 2001 at 02:21:12PM -0800
References: <no.id> <200103152221.RAA16060@panix3.panix.com>
Message-ID: <20010315235408.D29286@xs4all.nl>

On Thu, Mar 15, 2001 at 02:21:12PM -0800, Aahz Maruch wrote:

> I'm a little concerned that the 2.0 branch is being updated without a
> 2.0.1 target created, but it's quite possible my understanding of how
> this should work is faulty.

Probably (no offense intended) :) A maintenance branch was created together
with the release tag. A branch is a tag with an even number of dots. You can
either use cvs commit magic to commit a version to the branch, or you can
checkout a new tree or update a current tree with the branch-tag given in a
'-r' option. The tag then becomes sticky: if you run update again, it will
update against the branch files. If you commit, it will commit to the branch
files.

I keep the Mailman 2.0.x and 2.1 (head) branches in two different
directories, the 2.0-branch one checked out with:

cvs -d twouters@cvs.mailman.sourceforge.net:/cvsroot/mailman co -r \
Release_2_0_1-branch mailman; mv mailman mailman-2.0.x

It makes for very easy administration between releases. The one time I tried to
automatically import patches between two branches, I fucked up Mailman 2.0.2
and Barry had to release 2.0.3 less than a week later ;)

When you have a maintenance branch and you want to make a release in it, you
simply update your tree to the current state of that branch, and tag all the
files with tag (in Mailman) Release_2_0_3. You can then check out
specifically those files (and not changes that arrived later) and make a
tarball/windows install out of them.
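
As a command sketch (repository path and tag names follow the Mailman
example above and are only illustrative):

```shell
# Update a working tree that is sticky on the maintenance branch.
cd mailman-2.0.x
cvs update -r Release_2_0_1-branch

# Tag the current state of the branch as the release.
cvs tag Release_2_0_3

# Elsewhere, check out exactly the tagged files for packaging,
# without any changes committed to the branch later.
cvs -d :pserver:anonymous@cvs.example.org:/cvsroot/mailman \
    checkout -r Release_2_0_3 mailman
```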

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From aahz@pobox.com  Thu Mar 15 23:17:29 2001
From: aahz@pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 15:17:29 -0800 (PST)
Subject: [Python-Dev] Re: Preparing 2.0.1
In-Reply-To: <20010315235408.D29286@xs4all.nl> from "Thomas Wouters" at Mar 15, 2001 11:54:08 PM
Message-ID: <200103152317.SAA04392@panix2.panix.com>

Thanks.  Martin already cleared it up for me in private e-mail.  This
kind of knowledge lack is why I shouldn't be the Patch Czar, at least
not initially.  ;-)
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J


From greg@cosc.canterbury.ac.nz  Thu Mar 15 23:29:52 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 12:29:52 +1300 (NZDT)
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com>
Message-ID: <200103152329.MAA05500@s454.cosc.canterbury.ac.nz>

Tim Peters <tim.one@home.com>:
> [Guido]
>> Using decimal floating point won't fly either,
> If you again mean "by default", also agreed.

But if it's *not* by default, it won't stop naive users
from getting tripped up.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From aahz@pobox.com  Thu Mar 15 23:44:05 2001
From: aahz@pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 15:44:05 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010315234421.C29286@xs4all.nl> from "Thomas Wouters" at Mar 15, 2001 11:44:21 PM
Message-ID: <200103152344.SAA06969@panix2.panix.com>

Thomas Wouters:
>
> [ How to get case-insensitive import fixed in 2.0.x ]
> 
> Aahz:
>>
>> * Permit minor feature additions/changes.
>> 
>> If we choose that last option, I believe a distinction should be drawn
>> between releases that contain only bugfixes and releases that contain a
>> bit more.
> 
> We could make the distinction in the release notes. It could be a
> 'PURE BUGFIX RELEASE' or a 'FEATURE FIX RELEASE'. Bugfix releases just fix
> bugs, that is, wrong behaviour. feature fix releases fix misfeatures, like
> the case insensitive import issues. The difference between the two should be
> explained in the paragraph following the header, for *each* release. For
> example,

I shan't whine if BDFL vetoes it, but I think this info ought to be
encoded in the version number.  Other than that, it seems that we're
mostly quibbling over wording, and it doesn't matter much to me how we
do it; your suggestion is fine with me.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J


From greg@cosc.canterbury.ac.nz  Thu Mar 15 23:46:07 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 12:46:07 +1300 (NZDT)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103152234.RAA16951@panix3.panix.com>
Message-ID: <200103152346.MAA05504@s454.cosc.canterbury.ac.nz>

aahz@pobox.com (Aahz Maruch):

> My preference would be that 2.1.2p is a branch from 2.1.1.

That could be a rather confusing numbering system.

Also, once there has been a patch release, does that mean that
the previous sequence of bugfix-only releases is then closed off?

Even a minor feature addition has the potential to introduce
new bugs. Some people may not want to take even that small
risk, but still want to keep up with bug fixes, so there may
be a demand for a further bugfix release to 2.1.1 after
2.1.2p is released. How would such a release be numbered?

Seems to me that if you're going to have minor feature releases
at all, you need a four-level numbering system: W.X.Y.Z,
where Y is the minor feature release number and Z the bugfix
release number.
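
Such a four-level scheme orders naturally if identifiers are compared as
integer tuples, with missing trailing levels read as zero; a small sketch
(illustrative only, not a notation the PEP actually adopted):

```python
def parse_version(s):
    """Parse a 'W.X.Y.Z'-style identifier into a comparable tuple.

    Missing trailing levels count as zero, so '2.1' == '2.1.0.0'.
    """
    parts = [int(p) for p in s.split(".")]
    return tuple(parts + [0] * (4 - len(parts)))

# 2.1.2 (second minor-feature release, no bugfixes yet) sorts after
# 2.1.1.3 (third bugfix release for minor-feature release 1).
assert parse_version("2.1.1.3") < parse_version("2.1.2")
assert parse_version("2.1") == parse_version("2.1.0.0")
```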

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Thu Mar 15 23:48:31 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 12:48:31 +1300 (NZDT)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010315234421.C29286@xs4all.nl>
Message-ID: <200103152348.MAA05507@s454.cosc.canterbury.ac.nz>

Thomas Wouters <thomas@xs4all.net>:

> This means that the (unexpected) behaviour of one or more features was
> changed. This is a low-impact change that is unlikely to affect
> anyone

Ummm... if it's so unlikely to affect anything, is it really
worth making a special release for it?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim_one@email.msn.com  Fri Mar 16 01:34:52 2001
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 15 Mar 2001 20:34:52 -0500
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <200103152329.MAA05500@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEPJJFAA.tim_one@email.msn.com>

[Guido]
> Using decimal floating point won't fly either,

[Tim]
> If you again mean "by default", also agreed.

[Greg Ewing]
> But if it's *not* by default, it won't stop naive users
> from getting tripped up.

Naive users are tripped up by many things.  I want to stop them in *Python*
from stumbling over 1/3, not over 1./3 or 0.5.  Changing the meaning of the
latter won't fly, not at this stage in the language's life; if the language
were starting from scratch, sure, but it's not.

I have no idea why Guido is so determined that the *former* (1/3) yield
binary floating point too (as opposed to something saner, be it rationals or
decimal fp), but I'm still trying to provoke him into explaining that part
<0.5 wink>.

I believe users (both newbies and experts) would also benefit from an
explicit way to spell a saner alternative using a tagged fp notation.
Whatever that alternative may be, I want 1/3 (not 1./3. or 0.5 or 1e100) to
yield that too without futzing with tags.



From tim_one@email.msn.com  Fri Mar 16 02:25:41 2001
From: tim_one@email.msn.com (Tim Peters)
Date: Thu, 15 Mar 2001 21:25:41 -0500
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <200103151542.KAA09191@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEPKJFAA.tim_one@email.msn.com>

[Tim]
> I think you'd have a very hard time finding any pre-college
> level teacher who wants to teach binary fp.  Your ABC experience is
> consistent with that too.

[Guido]
> "Want to", no.  But whether they're teaching Java, C++, or Pascal,
> they have no choice: if they need 0.5, they'll need binary floating
> point, whether they explain it adequately or not.  Possibly they are
> all staying away from the decimal point completely, but I find that
> hard to believe.

Pascal is the only language there with any claim to newbie friendliness
(Stroustrup's essays notwithstanding).  Along with C, it grew up in the era
of mondo expensive mainframes with expensive binary floating-point hardware
(the CDC boxes Wirth used were designed by S. Cray, and like all such were
fast-fp-at-any-cost designs).

As the earlier Kahan quote said, the massive difference between then and now
is the "innocence" of a vastly larger computer audience.  A smaller
difference is that Pascal is effectively dead now.  C++ remains constrained
by compatibility with C, although any number of decimal class libraries are
available for it, and run as fast as C++ can make them run.  The BigDecimal
class has been standard in Java since 1.1, but, since it's Java, it's so
wordy to use that it's as tedious as everything else in Java for more than
occasional use.

OTOH, from Logo to DrScheme, with ABC and REXX in between, *some* builtin
alternative to binary fp is a feature of all languages I know of that aim not
to drive newbies insane.  "Well, its non-integer arithmetic is no worse than
C++'s" is no selling point for Python.

>>>  But other educators (e.g. Randy Pausch, and the folks who did
>>> VPython) strongly recommend this based on user observation, so
>>> there's hope.

>> Alice is a red herring!  What they wanted was for 1/2 *not* to
>> mean 0.  I've read the papers and dissertations too -- there was
>> no plea for binary fp in those, just that division not throw away
>> info.

> I never said otherwise.

OK, but then I don't know what it is you were saying.  Your sentence
preceding "... strongly recommend this ..." ended:

    this would mean an approximate, binary f.p. result for 1/3, and
    this does not seem to have the support of the educators ...

and I assumed the "this" in "Randy Paush, and ... VPython strongly recommend
this" also referred to "an approximate, binary f.p. result for 1/3".  Which
they did not strongly recommend.  So I'm lost as to what you're saying they
did strongly recommend.

Other people in this thread have said that 1./3. should give an exact
rational or a decimal fp result, but I have not.  I have said 1/3 should not
be 0, but there are at least 3 schemes on the table which deliver a non-zero
result for 1/3, only one of which is to deliver a binary fp result.
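A modern illustration of those three schemes, using the stdlib types that eventually grew out of these debates (fractions and decimal are stand-ins here, not anything that existed in 2001):

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Scheme 1: binary floating point (what 1/3 eventually became in Python 3)
print(1 / 3)                    # 0.3333333333333333

# Scheme 2: an exact rational result
print(Fraction(1, 3))           # 1/3

# Scheme 3: decimal floating point, rounded to the context's precision
getcontext().prec = 12
print(Decimal(1) / Decimal(3))  # 0.333333333333
```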

> It just boils down to binary fp as the only realistic choice.

For 1./3. and 0.67 I agree (for backward compatibility), but I've seen no
identifiable argument in favor of binary fp for 1/3.  Would Alice's users be
upset if that returned a rational or decimal fp value instead?  I'm tempted
to say "of course not", but I really haven't asked them <wink>.



From tim.one@home.com  Fri Mar 16 03:16:12 2001
From: tim.one@home.com (Tim Peters)
Date: Thu, 15 Mar 2001 22:16:12 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: <3AB0EE66.37E6C633@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEAAJGAA.tim.one@home.com>

[M.-A. Lemburg]
> Just out of curiosity: is there a usable decimal type implementation
> somewhere on the net which we could beat on ?

ftp://ftp.python.org/pub/python/
    contrib-09-Dec-1999/DataStructures/FixedPoint.py

It's more than two years old, and regularly mentioned on c.l.py.  From the
tail end of the module docstring:

"""
The following Python operators and functions accept FixedPoints in the
expected ways:

    binary + - * / % divmod
        with auto-coercion of other types to FixedPoint.
        + - % divmod  of FixedPoints are always exact.
        * / of FixedPoints may lose information to rounding, in
            which case the result is the infinitely precise answer
            rounded to the result's precision.
        divmod(x, y) returns (q, r) where q is a long equal to
            floor(x/y) as if x/y were computed to infinite precision,
            and r is a FixedPoint equal to x - q * y; no information
            is lost.  Note that r has the sign of y, and abs(r) < abs(y).
    unary -
    == != < > <= >=  cmp
    min  max
    float  int  long    (int and long truncate)
    abs
    str  repr
    hash
    use as dict keys
    use as boolean (e.g. "if some_FixedPoint:" -- true iff not zero)
"""

> I for one would be very interested in having a decimal type
> around (with fixed precision and scale),

FixedPoint is unbounded "to the left" of the point but maintains a fixed and
user-settable number of (decimal) digits "after the point".  You can easily
subclass it to complain about overflow, or whatever other damn-fool thing you
think is needed <wink>.

> since databases rely on these a lot and I would like to assure
> that passing database data through Python doesn't cause any data
> loss due to rounding issues.

Define your ideal API and maybe I can implement it someday.  My employer also
has use for this.  FixedPoint.py is better suited to computation than I/O,
though, since it uses Python longs internally, and conversion between
BCD-like formats and Python longs is expensive.

> If there aren't any such implementations yet, the site that Tim
> mentioned  looks like a good starting point for heading into this
> direction... e.g. for mx.Decimal ;-)
>
> 	http://www2.hursley.ibm.com/decimal/

FYI, note that Cowlishaw is moving away from REXX's "string of ASCII digits"
representation toward a variant of BCD encoding.




From barry@digicool.com  Fri Mar 16 03:31:08 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Thu, 15 Mar 2001 22:31:08 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
References: <200103150614.BAA04221@panix6.panix.com>
 <20010315233737.B29286@xs4all.nl>
Message-ID: <15025.35068.826947.482650@anthem.wooz.org>

Three things to keep in mind, IMO.  First, people dislike too many
choices.  As the version numbering scheme and branches go up, the
confusion level rises (it's probably like for each dot or letter you
add to the version number, the number of people who understand which
one to grab goes down an order of magnitude. :).  I don't think it
makes any sense to do more than one branch from the main trunk, and
then do bug fix releases along that branch whenever and for as long as
it seems necessary.

Second, you probably do not need a beta cycle for patch releases.
Just do the 2.0.2 release and if you've royally hosed something (which
is unlikely but possible) turn around and do the 2.0.3 release <wink>
a.s.a.p.

Third, you might want to create a web page, maybe a wiki is perfect
for this, that contains the most important patches.  It needn't
contain everything that goes into a patch release, but it can if
that's not too much trouble.  A nice explanation for each fix would
allow a user who doesn't want to apply the whole patch or upgrade to
just apply the most critical bug fixes for their application.  This
can get more complicated as the dependencies b/w patches go up, so
it may not be feasible for all patches, or for the entire lifetime of
the maintenance branch.

-Barry


From barry@digicool.com  Fri Mar 16 03:40:51 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Thu, 15 Mar 2001 22:40:51 -0500
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>
 <0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>
 <200103152132.f2FLWlE29312@mira.informatik.hu-berlin.de>
Message-ID: <15025.35651.57084.276629@anthem.wooz.org>

>>>>> "MvL" == Martin v Loewis <martin@loewis.home.cs.tu-berlin.de> writes:

    MvL> In any case, I'm in favour of a much more careful operation
    MvL> for a bugfix release. That probably means not all bugs that
    MvL> have been fixed already will be fixed in 2.0.1; I would not
    MvL> expect otherwise.

I agree.  I think each patch will require careful consideration by the
patch czar, and some will be difficult calls.  You're just not going
to "fix" everything in 2.0.1 that's fixed in 2.1.  Give it your best
shot and keep the overhead for making a new patch release low.  That
way, if you screw up or get a hue and cry for not including a patch
everyone else considers critical, you can make a new patch release
fairly soon thereafter.

-Barry


From barry@digicool.com  Fri Mar 16 03:57:40 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Thu, 15 Mar 2001 22:57:40 -0500
Subject: [Python-Dev] Re: Preparing 2.0.1
References: <no.id>
 <200103152221.RAA16060@panix3.panix.com>
 <20010315235408.D29286@xs4all.nl>
Message-ID: <15025.36660.87154.993275@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

Thanks for the explanation Thomas, that's exactly how I manage the
Mailman trees too.  A couple of notes.

    TW> I keep the Mailman 2.0.x and 2.1 (head) branches in two
    TW> different directories, the 2.0-branch one checked out with:

    TW> cvs -d twouters@cvs.mailman.sourceforge.net:/cvsroot/mailman
    TW> co -r \ Release_2_0_1-branch mailman; mv mailman mailman-2.0.x
----------------^^^^^^^^^^^^^^^^^^^^

If I had to do it over again, I would have called this the
Release_2_0-maint branch.  I think that makes more sense when you see
the Release_2_0_X tags along that branch.

This was really my first foray back into CVS branches after my last
disaster (the string-meths branch on Python).  Things are working much
better this time, so I guess I understand how to use them now...

...except that I hit a small problem with CVS.  When I was ready to
release a new patch release along the maintenance branch, I wasn't
able to coax CVS into giving me a log between two tags on the branch.
E.g. I tried:

    cvs log -rRelease_2_0_1 -rRelease_2_0_2

(I don't actually remember at the moment whether it's specified like
this or with a colon between the release tags, but that's immaterial).

The resulting log messages did not include any of the changes between
those two tags.  However a "cvs diff" between the two tags /did/
give me the proper output, as did a "cvs log" between the branch tag
and the end of the branch.

Could have been a temporary glitch in CVS or maybe I was dipping into
the happy airplane pills a little early.  I haven't tried it again
since.

took-me-about-three-hours-to-explain-this-to-jeremy-on-the-way-to-ipc9
    -but-the-happy-airplane-pills-were-definitely-partying-in-my
    -bloodstream-at-the-time-ly y'rs,

-Barry


From tim.one@home.com  Fri Mar 16 06:34:33 2001
From: tim.one@home.com (Tim Peters)
Date: Fri, 16 Mar 2001 01:34:33 -0500
Subject: [Python-Dev] Patch Manager Guidelines
In-Reply-To: <200103151644.LAA09360@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEAGJGAA.tim.one@home.com>

[Martin]
> I have to following specific questions: If a patch is accepted, should
> it be closed also? If so, how should the resolution change if it is
> also committed?

[Guido]
> A patch should only be closed after it has been committed; otherwise
> it's too easy to lose track of it.  So I guess the proper sequence is
>
> 1. accept; Resolution set to Accepted
>
> 2. commit; Status set to Closed
>
> I hope the owner of the sf-faq document can fix it.

Heh -- there is no such person.  Since I wrote that Appendix to begin with, I
checked in appropriate changes:  yes, status should be Open if and only if
something still needs to be done (even if that's only a commit); status
should be Closed or Deleted if and only if nothing more should ever be done.



From tim.one@home.com  Fri Mar 16 07:02:08 2001
From: tim.one@home.com (Tim Peters)
Date: Fri, 16 Mar 2001 02:02:08 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103151539.QAA01573@core.inf.ethz.ch>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com>

[Samuele Pedroni]
> ...
> I was thinking about stuff like generators used everywhere,
> but that is maybe just uninformed panicing. They are the
> kind of stuff that make programmers addictive <wink>.

Jython is to CPython as Jcon is to Icon, and *every* expression in Icon is "a
generator".

    http://www.cs.arizona.edu/icon/jcon/

is the home page, and you can get a paper from there detailing the Jcon
implementation.  It wasn't hard, and it's harder in Jcon than it would be in
Jython because Icon generators are also tied into a ubiquitous backtracking
scheme ("goal-directed evaluation").

Does Jython have an explicit object akin to CPython's execution frame?  If
so, 96.3% of what's needed for generators is already there.

At the other end of the scale, Jcon implements Icon's co-expressions (akin to
coroutines) via Java threads.



From tismer@tismer.com  Fri Mar 16 10:37:30 2001
From: tismer@tismer.com (Christian Tismer)
Date: Fri, 16 Mar 2001 11:37:30 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103152245.LAA05494@s454.cosc.canterbury.ac.nz>
Message-ID: <3AB1ECEA.CD0FFC51@tismer.com>

This is going to be a hard task.
Well, let me give it a try...

Greg Ewing wrote:
> 
> > But most probably, it will run interpreters from time to time.
> > These can be told to take the scheduling role on.
> 
> You'll have to expand on that. My understanding is that
> all the uthreads would have to run in a single C-level
> interpreter invocation which can never be allowed to
> return. I don't see how different interpreters can be
> made to "take on" this role. If that were possible,
> there wouldn't be any problem in the first place.
> 
> > It does not matter on which interpreter level we are,
> > we just can't switch to frames of other levels. But
> > even leaving a frame chain, and re-entering later
> > with a different stack level is no problem.
> 
> You'll have to expand on that, too. Those two sentences
> sound contradictory to me.

Hmm. I can't see the contradiction yet. Let me try to explain,
maybe everything becomes obvious.

A microthread is a chain of frames.
All microthreads are sitting "below" a scheduler,
which ties them all together to a common root.
So this is a little like a tree.

There is a single interpreter who does the scheduling
and the processing.
At any time, there is
- either one thread running, or
- the scheduler itself.

As long as this interpreter is running, scheduling takes place.
But against your assumption, this interpreter can of course
return. He leaves the uthread tree structure intact and jumps
out of the scheduler, back to the calling C function.
This is doable.

But then, all the frames of the uthread tree are in a defined
state, none is currently being executed, so none is locked.
We can now use any other interpreter instance that is
created and use it to restart the scheduling process.

Maybe this clarifies it:
We cannot mix different interpreter levels *at the same time*.
It is not possible to schedule from a nested interpreter,
since that one needs to be unwound first.
But stopping the interpreter is a perfect unwind, and we
can start again from anywhere.
Therefore, a call-back driven UI should be no problem.

Thanks for the good question; I had never completely
thought it through before.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From nas@arctrix.com  Fri Mar 16 11:37:33 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 03:37:33 -0800
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 16, 2001 at 02:02:08AM -0500
References: <200103151539.QAA01573@core.inf.ethz.ch> <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com>
Message-ID: <20010316033733.A9366@glacier.fnational.com>

On Fri, Mar 16, 2001 at 02:02:08AM -0500, Tim Peters wrote:
> Does Jython have an explicit object akin to CPython's execution frame?  If
> so, 96.3% of what's needed for generators is already there.

FWIW, I think I almost have generators working after making
fairly minor changes to frameobject.c and ceval.c.  The only
remaining problem is that ceval likes to nuke f_valuestack.  The
hairy WHY_* logic is making this hard to fix.  Based on all the
conditionals it looks like it would be similer to put this code
in the switch statement.  That would probably speed up the
interpreter to boot.  Am I missing something or should I give it
a try?

  Neil


From nas@arctrix.com  Fri Mar 16 11:43:46 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 03:43:46 -0800
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <20010316033733.A9366@glacier.fnational.com>; from nas@arctrix.com on Fri, Mar 16, 2001 at 03:37:33AM -0800
References: <200103151539.QAA01573@core.inf.ethz.ch> <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com> <20010316033733.A9366@glacier.fnational.com>
Message-ID: <20010316034346.B9366@glacier.fnational.com>

On Fri, Mar 16, 2001 at 03:37:33AM -0800, Neil Schemenauer wrote:
> Based on all the conditionals it looks like it would be similer
> to put this code in the switch statement.

s/similer/simpler.  It's early and I have the flu, okay? :-)

  Neil


From moshez@zadka.site.co.il  Fri Mar 16 13:18:43 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Fri, 16 Mar 2001 15:18:43 +0200
Subject: [Python-Dev] [Very Long (11K)] Numeric PEPs, first public posts
Message-ID: <E14du7v-0004Xn-00@darjeeling>

After the brouhaha at IPC9, it was decided that while PEP-0228 should stay
as a possible goal, there should be more concrete PEPs suggesting specific
changes in Python numerical model, with implementation suggestions and
migration paths fleshed out. So, there are four new PEPs now, all proposing
changes to Python's numeric model. There are some connections between them,
but each is supposed to be accepted or rejected according to its own merits.

To facilitate discussion, I'm including copies of the PEPs concerned
(for reference purposes, these are PEPs 0237-0240, and the latest public
version is always in the Python CVS under non-dist/peps/ . A reasonably
up to date version is linked from http://python.sourceforge.net)

Please direct all future discussion to python-numerics@lists.sourceforge.net
This list has been especially set-up to discuss those subjects.

PEP: 237
Title: Unifying Long Integers and Integers
Version: $Revision: 1.2 $
Author: pep@zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    Python has both an integer (machine word size integral) type and a
    long integer (unbounded integral) type.  When integer operations
    overflow the machine registers, they raise an error.
    This PEP proposes to do away with the distinction, and unify the
    types from the perspective of both the Python interpreter and the
    C API.


Rationale

    Having the machine word size exposed to the language hinders
    portability.  For example, Python source files and .pyc's are not
    portable because of this.  Many programs find a need to deal with
    larger numbers after the fact, and changing the algorithms later
    is not only bothersome, but hinders performance in the normal
    case.


Literals

    A trailing 'L' at the end of an integer literal will stop having
    any meaning, and will be eventually phased out.  This will be done
    using warnings when encountering such literals.  The warning will
    be off by default in Python 2.2, on for 12 months, which will
    probably mean Python 2.3 and 2.4, and then will no longer be
    supported.


Builtin Functions

    The function long() will call the function int(), issuing a
    warning.  The warning will be off in 2.2, and on for two revisions
    before removing the function.  A FAQ will be added to explain that
    solutions for old modules are:

         long=int

    at the top of the module, or:

         import __builtin__
         __builtin__.long=int

    in site.py.


C API

    All PyLong_As* will call PyInt_As*.  If PyInt_As* does not exist,
    it will be added.  Similarly for PyLong_From*.  A similar path of
    warnings as for the Python builtins will be followed.


Overflows

    When an arithmetic operation on two numbers whose internal
    representation is as machine-level integers returns something
    whose internal representation is a bignum, a warning which is
    turned off by default will be issued.  This is only a debugging
    aid, and has no guaranteed semantics.


Implementation

    The PyInt type's slot for a C long will be turned into a 

        union {
            long i;
            struct {
                unsigned long length;
                digit digits[1];
            } bignum;
        };

    Only the n-1 lower bits of the long have any meaning; the top bit
    is always set.  This distinguishes the union.  All PyInt functions
    will check this bit before deciding which types of operations to
    use.


Jython Issues

    Jython will have a PyInt interface which is implemented by both
    from PyFixNum and PyBigNum.


Open Issues

    What to do about sys.maxint?

    What to do about PyInt_AS_LONG failures?

    What to do about %u, %o, %x formatting operators?

    How to warn about << not cutting integers?

    Should the overflow warning be on a portable maximum size?

    Will unification of types and classes help with a more straightforward
    implementation?


Copyright

    This document has been placed in the public domain.


PEP: 238
Title: Non-integer Division
Version: $Revision: 1.1 $
Author: pep@zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    Dividing integers currently returns the floor of the quotient.
    This behavior is known as integer division, and is similar to what
    C and FORTRAN do.  This has the useful property that all
    operations on integers return integers, but it does tend to put a
    hump in the learning curve when new programmers are surprised that

        1/2 == 0

    This proposal shows a way to change this while keeping backward
    compatibility issues in mind.


Rationale

    The behavior of integer division is a major stumbling block found
    in user testing of Python.  This manages to trip up new
    programmers regularly and even causes the experienced programmer
    to make the occasional mistake.  The workarounds, like explicitly
    coercing one of the operands to float or using a non-integer
    literal, are very non-intuitive and lower the readability of the
    program.


// Operator

    A `//' operator will be introduced, which will call the
    nb_intdivide or __intdiv__ slots.  This operator will be
    implemented in all the Python numeric types, and will have the
    semantics of

        a // b == floor(a/b)

    Except that the type of a//b will be the type a and b will be
    coerced into.  Specifically, if a and b are of the same type, a//b
    will be of that type too.
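    The operator described here is what eventually shipped, spelled the
    same way, in Python 2.2 (though the slot landed as __floordiv__
    rather than __intdiv__).  A quick check of the stated semantics:

```python
import math

assert 7 // 2 == 3
assert -7 // 2 == -4                  # floor(-3.5): toward negative infinity
assert -7 // 2 == math.floor(-7 / 2)  # a // b == floor(a/b)
assert 7.0 // 2 == 3.0                # result has the coerced common type
```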


Changing the Semantics of the / Operator

    The nb_divide slot on integers (and long integers, if these are a
    separate type, but see PEP 237[1]) will issue a warning when given
    integers a and b such that

        a % b != 0

    The warning will be off by default in the 2.2 release, and on by
    default in the next Python release, and will stay in effect
    for 24 months.  The first Python release after those 24 months will
    implement

        (a/b) * b = a (more or less)

    The type of a/b will be either a float or a rational, depending on
    other PEPs[2, 3].


__future__

    A special opcode, FUTURE_DIV will be added that does the
    equivalent of:

        if type(a) in (types.IntType, types.LongType):
            if type(b) in (types.IntType, types.LongType):
                if a % b != 0:
                    return float(a)/b
        return a/b

    (or rational(a)/b, depending on whether 0.5 is rational or float).

    If "from __future__ import non_integer_division" is present in the
    module, until the IntType nb_divide is changed, the "/" operator
    is compiled to FUTURE_DIV.
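    This is essentially the mechanism that shipped: Python 2.2 added
    "from __future__ import division", and Python 3 made it the
    default.  One difference from the sketch above: the shipped rule
    returns a float even when a % b == 0, not only for inexact cases:

```python
# Python 3 semantics (or Python 2 after "from __future__ import division"):
assert 1 / 2 == 0.5                           # no information discarded
assert 4 / 2 == 2.0 and type(4 / 2) is float  # float even when exact
assert 1 // 2 == 0                            # old behavior lives on as //
```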


Open Issues

    Should the // operator be renamed to "div"?


References

    [1] PEP 237, Unifying Long Integers and Integers, Zadka,
        http://python.sourceforge.net/peps/pep-0237.html

    [2] PEP 239, Adding a Rational Type to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0239.html

    [3] PEP 240, Adding a Rational Literal to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0240.html


Copyright

    This document has been placed in the public domain.


PEP: 239
Title: Adding a Rational Type to Python
Version: $Revision: 1.1 $
Author: pep@zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    Python has no numeric type with the semantics of an unboundedly
    precise rational number.  This proposal explains the semantics of
    such a type, and suggests builtin functions and literals to
    support such a type.  This PEP suggests no literals for rational
    numbers; that is left for another PEP[1].


Rationale

    While sometimes slower and more memory intensive (in general,
    unboundedly so) rational arithmetic captures more closely the
    mathematical ideal of numbers, and tends to have behavior which is
    less surprising to newbies.  Though many Python implementations of
    rational numbers have been written, none of these exist in the
    core, or are documented in any way.  This has made them much less
    accessible to people who are less Python-savvy.


RationalType

    There will be a new numeric type added called RationalType.  Its
    unary operators will do the obvious thing.  Binary operators will
    coerce integers and long integers to rationals, and rationals to
    floats and complexes.

    The following attributes will be supported: .numerator and
    .denominator.  The language definition will not define these other
    than that:

        r.denominator * r == r.numerator

    In particular, no guarantees are made regarding the GCD or the
    sign of the denominator, even though in the proposed
    implementation, the GCD is always 1 and the denominator is always
    positive.

    The method r.trim(max_denominator) will return the closest
    rational s to r such that abs(s.denominator) <= max_denominator.


The rational() Builtin

    This function will have the signature rational(n, d=1).  n and d
    must both be integers, long integers or rationals.  A guarantee is
    made that

        rational(n, d) * d == n
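    No rational() builtin was ever added, but Python 2.6's
    fractions.Fraction is essentially this type, including the
    .numerator/.denominator attributes and a limit_denominator()
    method in the role of the proposed trim():

```python
from fractions import Fraction
from math import pi

r = Fraction(10, 4)
assert (r.numerator, r.denominator) == (5, 2)  # lowest terms, positive denominator
assert r.denominator * r == r.numerator        # the invariant the PEP requires
assert Fraction(1, 3) + Fraction(1, 6) == Fraction(1, 2)

# trim(max_denominator) corresponds to limit_denominator(max_denominator):
assert Fraction(pi).limit_denominator(1000) == Fraction(355, 113)
```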


References

    [1] PEP 240, Adding a Rational Literal to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0240.html


Copyright

    This document has been placed in the public domain.


PEP: 240
Title: Adding a Rational Literal to Python
Version: $Revision: 1.1 $
Author: pep@zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    A different PEP[1] suggests adding a builtin rational type to
    Python.  This PEP suggests changing the ddd.ddd float literal to a
    rational in Python, and modifying non-integer division to return
    it.


Rationale

    Rational numbers are useful, and are much harder to use without
    literals.  Making the "obvious" non-integer type one with more
    predictable semantics will surprise new programmers less than
    using floating point numbers.


Proposal

    Literals conforming to the regular expression '\d*\.\d*' will be
    rational numbers.


Backwards Compatibility

    The only backwards compatibility issue is the type of literals
    mentioned above.  The following migration is suggested:

    1. "from __future__ import rational_literals" will cause all such
       literals to be treated as rational numbers.

    2. Python 2.2 will have a warning, turned off by default, about
       such literals in the absence of a __future__ statement.  The
       warning message will contain information about the __future__
       statement, and indicate that to get floating point literals,
       they should be suffixed with "e0".

    3. Python 2.3 will have the warning turned on by default.  This
       warning will stay in place for 24 months, at which time the
       literals will be rationals and the warning will be removed.
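    Rational literals never landed, so 0.1 stayed a binary float; the
    gap the PEP is pointing at can still be shown by constructing the
    value both ways, with fractions as a stand-in for the proposal:

```python
from fractions import Fraction

exact = Fraction("0.1")    # what a rational literal 0.1 would have denoted
binary = Fraction(0.1)     # the exact value of today's float literal 0.1

assert exact == Fraction(1, 10)
assert binary != exact     # 1/10 has no finite binary representation
assert binary == Fraction(3602879701896397, 2 ** 55)
```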


References

    [1] PEP 239, Adding a Rational Type to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0239.html


Copyright

    This document has been placed in the public domain.


From nas@arctrix.com  Fri Mar 16 13:54:48 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 05:54:48 -0800
Subject: [Python-Dev] Simple generator implementation
In-Reply-To: <20010316033733.A9366@glacier.fnational.com>; from nas@arctrix.com on Fri, Mar 16, 2001 at 03:37:33AM -0800
References: <200103151539.QAA01573@core.inf.ethz.ch> <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com> <20010316033733.A9366@glacier.fnational.com>
Message-ID: <20010316055448.A9591@glacier.fnational.com>

On Fri, Mar 16, 2001 at 03:37:33AM -0800, Neil Schemenauer wrote:
> ... it looks like it would be similer to put this code in the
> switch statement.

Um, no.  Bad idea.  Even if I could restructure the loop, try/finally
blocks mess everything up anyhow.

After searching through many megabytes of python-dev archives (grepmail
is my friend), I finally found the posts Tim was referring me to
(Subject: Generator details, Date: July 1999).  Guido and Tim already
had the answer for me.  Now:

    import sys

    def g():
        for n in range(10):
            suspend n, sys._getframe()
        return None, None

    n, frame = g()
    while frame:
        print n
        n, frame = frame.resume()

merrily prints 0 to 9 on stdout.  Whee!
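For comparison with the hypothetical suspend/resume spelling above: when
generators landed for real (PEP 255, Python 2.2), the frame bookkeeping
moved inside the generator object and the keyword became yield:

```python
def g():
    for n in range(10):
        yield n          # suspends g, handing n back to the caller

for n in g():
    print(n)             # prints 0 through 9, like the suspend/resume version
```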

  Neil


From aahz@pobox.com (Aahz Maruch)  Fri Mar 16 16:51:54 2001
From: aahz@pobox.com (Aahz Maruch) (aahz@pobox.com (Aahz Maruch))
Date: Fri, 16 Mar 2001 08:51:54 -0800 (PST)
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <no.id> from "Martin v. Loewis" at Mar 15, 2001 10:32:47 PM
Message-ID: <200103161651.LAA18978@panix2.panix.com>

> 2. the author of the original patch can make that decision. That would
>    mean that Fredrik Lundh can still install his code as-is, but I'd
>    have to ask Fred's permission.
> 
> 3. the bug release coordinator can make that decision. That means that
>    Aahz must decide.

I'm in favor of some combination of 2) and 3).
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J


From martin@loewis.home.cs.tu-berlin.de  Fri Mar 16 17:46:47 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 16 Mar 2001 18:46:47 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <200103161651.LAA18978@panix2.panix.com> (aahz@panix.com)
References: <200103161651.LAA18978@panix2.panix.com>
Message-ID: <200103161746.f2GHklZ00972@mira.informatik.hu-berlin.de>

> I'm in favor of some combination of 2) and 3).

So let's try this out: Is it ok to include the new fields on range
objects in 2.0.1?

Regards,
Martin



From mal@lemburg.com  Fri Mar 16 18:09:17 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 16 Mar 2001 19:09:17 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCEEAAJGAA.tim.one@home.com>
Message-ID: <3AB256CD.AE35DDEC@lemburg.com>

Tim Peters wrote:
> 
> [M.-A. Lemburg]
> > Just out of curiosity: is there a usable decimal type implementation
> > somewhere on the net which we could beat on ?
> 
> ftp://ftp.python.org/pub/python/
>     contrib-09-Dec-1999/DataStructures/FixedPoint.py

So my intuition wasn't wrong -- you had all this already implemented
years ago ;-)
 
> It's more than two years old, and regularly mentioned on c.l.py.  From the
> tail end of the module docstring:
> 
> """
> The following Python operators and functions accept FixedPoints in the
> expected ways:
> 
>     binary + - * / % divmod
>         with auto-coercion of other types to FixedPoint.
>         + - % divmod  of FixedPoints are always exact.
>         * / of FixedPoints may lose information to rounding, in
>             which case the result is the infinitely precise answer
>             rounded to the result's precision.
>         divmod(x, y) returns (q, r) where q is a long equal to
>             floor(x/y) as if x/y were computed to infinite precision,
>             and r is a FixedPoint equal to x - q * y; no information
>             is lost.  Note that q has the sign of y, and abs(r) < abs(y).
>     unary -
>     == != < > <= >=  cmp
>     min  max
>     float  int  long    (int and long truncate)
>     abs
>     str  repr
>     hash
>     use as dict keys
>     use as boolean (e.g. "if some_FixedPoint:" -- true iff not zero)
> """

Very impressive ! The code really shows just how difficult it is
to get this done right (w/r to some definition of that term ;).

BTW, is the implementation ANSI/IEEE standards conform ?

> > I for one would be very interested in having a decimal type
> > around (with fixed precision and scale),
> 
> FixedPoint is unbounded "to the left" of the point but maintains a fixed and
> user-settable number of (decimal) digits "after the point".  You can easily
> subclass it to complain about overflow, or whatever other damn-fool thing you
> think is needed <wink>.

I'll probably leave that part to the database interface ;-) Since they
check for possible overflows anyway, I think your model fits the
database world best.

Note that I will have to interface to database using the string
representation, so I might get away with adding scale and precision
parameters to a (new) asString() method.

> > since databases rely on these a lot and I would like to assure
> > that passing database data through Python doesn't cause any data
> > loss due to rounding issues.
> 
> Define your ideal API and maybe I can implement it someday.  My employer also
> has use for this.  FixedPoint.py is better suited to computation than I/O,
> though, since it uses Python longs internally, and conversion between
> BCD-like formats and Python longs is expensive.

See above: if string representations can be computed fast,
then the internal storage format is secondary.
 
> > If there aren't any such implementations yet, the site that Tim
> > mentioned  looks like a good starting point for heading into this
> > direction... e.g. for mx.Decimal ;-)
> >
> >       http://www2.hursley.ibm.com/decimal/
> 
> FYI, note that Cowlishaw is moving away from REXX's "string of ASCII digits"
> representation toward a variant of BCD encoding.

Hmm, ideal would be an Open Source C lib which could be used as
backend for the implementation... haven't found such a beast yet
and the IBM BigDecimal Java class doesn't really look attractive as
basis for a C++ reimplementation.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From aahz@pobox.com (Aahz Maruch)  Fri Mar 16 18:29:29 2001
From: aahz@pobox.com (Aahz Maruch) (aahz@pobox.com (Aahz Maruch))
Date: Fri, 16 Mar 2001 10:29:29 -0800 (PST)
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <no.id> from "Martin v. Loewis" at Mar 16, 2001 06:46:47 PM
Message-ID: <200103161829.NAA23971@panix6.panix.com>

> So let's try this out: Is it ok to include the new fields on range
> objects in 2.0.1?

My basic answer is "no".  This is complicated by the fact that the 2.22
patch on rangeobject.c *also* fixes the __contains__ bug [*].
Nevertheless, if I were the Patch Czar (and note the very, very
deliberate use of the subjunctive here), I'd probably tell whoever
wanted to fix the __contains__ bug to submit a new patch that does not
include the new xrange() attributes.


[*]  Whee!  I figured out how to browse CVS!  ;-)
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J


From mal@lemburg.com  Fri Mar 16 20:29:59 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 16 Mar 2001 21:29:59 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCEEAAJGAA.tim.one@home.com> <3AB256CD.AE35DDEC@lemburg.com>
Message-ID: <3AB277C7.28FE9B9B@lemburg.com>

Looking around some more on the web, I found that the GNU MP (GMP)
lib has switched from being GPLed to LGPLed, meaning that it
can actually be used by non-GPLed code as long as the source code
for the GMP remains publicly accessible.

Some background which probably motivated this move can be found 
here:

  http://www.ptf.com/ptf/products/UNIX/current/0264.0.html
  http://www-inst.eecs.berkeley.edu/~scheme/source/stk/Mp/fgmp-1.0b5/notes

Since the GMP offers arbitrary precision numbers and also has
a rational number implementation I wonder if we could use it
in Python to support fractions and arbitrary precision
floating points ?!

Here's a pointer to what the GNU MP has to offer:

  http://www.math.columbia.edu/online/gmp.html

The existing mpz module only supports MP integers, but support
for the other two types should only be a matter of hard work
;-).
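As a rough sketch of the exact-rational semantics being discussed (using the pure-Python `fractions` module that later entered the stdlib, not GMP itself):

```python
from fractions import Fraction

# Exact rational arithmetic of the kind GMP's rational (mpq)
# layer provides; no binary floating-point rounding anywhere.
a = Fraction(1, 3)
b = Fraction(1, 6)
print(a + b)    # 1/2, exact
```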

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From gward@python.net  Fri Mar 16 22:34:23 2001
From: gward@python.net (Greg Ward)
Date: Fri, 16 Mar 2001 17:34:23 -0500
Subject: [Python-Dev] Media spotting
Message-ID: <20010316173423.A20849@cthulhu.gerg.ca>

No doubt the Vancouver crowd has already seen this by now, but the rest
of you probably haven't.  From *The Globe and Mail*, March 15 2001, page
T5:

"""
Targeting people who work with computers but aren't programmers -- such
as data analysts, software testers, and Web masters -- ActivePerl comes
with telephone support and developer tools such as an "editor."  This
feature highlights mistakes made in a user's work -- similar to the
squiggly line that appears under spelling mistakes in Word documents.
"""

A-ha! so *that's* what editors are for!

        Greg

PS. article online at

  http://news.globetechnology.com/servlet/GAMArticleHTMLTemplate?tf=globetechnology/TGAM/NewsFullStory.html&cf=globetechnology/tech-config-neutral&slug=TWCOME&date=20010315

Apart from the above paragraph, it's pretty low on howlers.

-- 
Greg Ward - programmer-at-big                           gward@python.net
http://starship.python.net/~gward/
If you and a friend are being chased by a lion, it is not necessary to
outrun the lion.  It is only necessary to outrun your friend.


From sanner@scripps.edu  Sat Mar 17 01:43:23 2001
From: sanner@scripps.edu (Michel Sanner)
Date: Fri, 16 Mar 2001 17:43:23 -0800
Subject: [Python-Dev] import question
Message-ID: <1010316174323.ZM10134@noah.scripps.edu>

Hi, I didn't get any response on help-python.org so I figured I'd try these lists


if I have the following package hierarchy

A/
	__init__.py
        B/
		__init__.py
		C.py


I can use:

>>> from A.B import C

but if I use:

>>> import A
>>> print A
<module 'A' from 'A/__init__.pyc'>
>>> from A import B
>>> print B
<module 'A.B' from 'A/B/__init__.py'>
>>> from B import C
Traceback (innermost last):
  File "<stdin>", line 1, in ?
ImportError: No module named B

in order to get this to work I have to

>>> import sys
>>> sys.modules['B'] = B

Is that expected ?
In the documentation I read:

"from" module "import" identifier

so I expected "from B import C" to be legal since B is a module

I tried this with Python 1.5.2 and 2.0 on an sgi under IRIX6.5

Thanks for any help

-Michel

-- 

-----------------------------------------------------------------------

>>>>>>>>>> AREA CODE CHANGE <<<<<<<<< we are now 858 !!!!!!!

Michel F. Sanner Ph.D.                   The Scripps Research Institute
Assistant Professor			Department of Molecular Biology
					  10550 North Torrey Pines Road
Tel. (858) 784-2341				     La Jolla, CA 92037
Fax. (858) 784-2860
sanner@scripps.edu                        http://www.scripps.edu/sanner
-----------------------------------------------------------------------



From guido@digicool.com  Sat Mar 17 02:13:14 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 16 Mar 2001 21:13:14 -0500
Subject: [Python-Dev] Re: [Import-sig] import question
In-Reply-To: Your message of "Fri, 16 Mar 2001 17:43:23 PST."
 <1010316174323.ZM10134@noah.scripps.edu>
References: <1010316174323.ZM10134@noah.scripps.edu>
Message-ID: <200103170213.VAA13856@cj20424-a.reston1.va.home.com>

> if I have the following package hierarchy
> 
> A/
> 	__init__.py
>         B/
> 		__init__.py
> 		C.py
> 
> 
> I can use:
> 
> >>> from A.B import C
> 
> but if I use:
> 
> >>> import A
> >>> print A
> <module 'A' from 'A/__init__.pyc'>
> >>> from A import B
> print B
> <module 'A.B' from 'A/B/__init__.py'>
> >>> from B import C
> Traceback (innermost last):
>   File "<stdin>", line 1, in ?
> ImportError: No module named B
> 
> in order to get this to work I have to
> 
> >>> import sys
> >>> sys.modules['B'] = B
> 
> Is that expected ?
> In the documentation I read:
> 
> "from" module "import" identifier
> 
> so I expected "from B import C" to be legal since B is a module
> 
> I tried this with Python 1.5.2 and 2.0 on an sgi under IRIX6.5
> 
> Thanks for any help
> 
> -Michel

In "from X import Y", X is not a reference to a name in your
namespace, it is a module name.  The right thing is indeed to write
"from A.B import C".  There's no way to shorten this; what you did
(assigning sys.modules['B'] = B) is asking for trouble.
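The supported spelling can be sketched end to end (the temp-directory layout below is purely illustrative, mirroring the A/B/C.py hierarchy from the question):

```python
import os
import sys
import tempfile

# Build a hypothetical package layout A/B/C.py in a temp directory.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'A', 'B'))
for pkg in ('A', os.path.join('A', 'B')):
    open(os.path.join(root, pkg, '__init__.py'), 'w').close()
with open(os.path.join(root, 'A', 'B', 'C.py'), 'w') as f:
    f.write('VALUE = 42\n')

sys.path.insert(0, root)
from A.B import C     # X in "from X import Y" is a dotted module
                      # name, never a local variable bound earlier
print(C.VALUE)        # 42
```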

Sorry!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From palisade@SirDrinkalot.rm-f.net  Sat Mar 17 02:37:54 2001
From: palisade@SirDrinkalot.rm-f.net (Palisade)
Date: Fri, 16 Mar 2001 18:37:54 -0800
Subject: [Python-Dev] PEP dircache.py core modification
Message-ID: <20010316183754.A7151@SirDrinkalot.rm-f.net>

--h31gzZEtNLTqOjlF
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

This is my first exposure to the Python language, and I have found many things
to my liking. I have also noticed some quirks which I regard as assumption
flaws on part of the interpreter. The one I am interested in at the moment is
the assumption that we should leave the . and .. directory entries out of the
directory listing returned by os.listdir().

I have read the PEP specification and have thereby prepared a PEP for your
perusal. I hope you agree with me that this is both a philosophical issue
based in tradition as well as a duplication of effort problem that can be
readily solved with regards to backwards compatibility.

Thank you.

I have attached the PEP to this message.

Sincerely,
Nelson Rush

"This most beautiful system [The Universe] could only proceed from the
dominion of an intelligent and powerful Being."
-- Sir Isaac Newton

--h31gzZEtNLTqOjlF
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=pep-ldir

PEP: 
Title: os.listdir Full Directory Listing
Version: 
Author: palisade@users.sourceforge.net (Nelson Rush)
Status: 
Type: 
Created: 16/3/2001
Post-History: 

Introduction

    This PEP explains the need for two missing elements in the list returned
    by the os.listdir function.



Proposal

    It is obvious that having os.listdir() return a list with . and .. is
    going to cause many existing programs to function incorrectly. One
    solution to this problem could be to create a new function os.listdirall()
    or os.ldir() which returns every file and directory including the . and ..
    directory entries. Another solution could be to overload os.listdir's
    parameters, but that would unnecessarily complicate things.



Key Differences with the Existing Protocol

    The existing os.listdir() leaves out both the . and .. directory entries
    which are a part of the directory listing as is every other file.



Examples

    import os
    dir = os.ldir('/')
    for i in dir:
        print i

    The output would become:

    .
    ..
    lost+found
    tmp
    usr
    var
    WinNT
    dev
    bin
    home
    mnt
    sbin
    boot
    root
    man
    lib
    cdrom
    proc
    etc
    info
    pub
    .bash_history
    service



Dissenting Opinion

    During a discussion on Efnet #python, an objection was made to the
    usefulness of this implementation. Namely, that it is little extra
    effort to just insert these two directory entries into the list.

    Example:

    os.listdir() + ['.','..']

    An argument can be made however that the inclusion of both . and ..
    meet the standard way of listing files within directories. It is on
    basis of this common method between languages of listing directories
    that this tradition should be maintained.

    It was also suggested that not having . and .. returned in the list
    by default is required to be able to perform such actions as `cp * dest`.

    However, programs like `ls` and `cp` list and copy files excluding
    any directory that begins with a period. Therefore there is no need
    to clip . and .. from the directory list by default, since anything
    beginning with a period is considered to be hidden.



Reference Implementation

    The reference implementation of the new dircache.py core ldir function
    extends listdir's functionality as proposed.

    http://palisade.rm-f.net/dircache.py



Copyright

    This document has been placed in the Public Domain.

--h31gzZEtNLTqOjlF--


From guido@digicool.com  Sat Mar 17 02:42:29 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 16 Mar 2001 21:42:29 -0500
Subject: [Python-Dev] PEP dircache.py core modification
In-Reply-To: Your message of "Fri, 16 Mar 2001 18:37:54 PST."
 <20010316183754.A7151@SirDrinkalot.rm-f.net>
References: <20010316183754.A7151@SirDrinkalot.rm-f.net>
Message-ID: <200103170242.VAA14061@cj20424-a.reston1.va.home.com>

Sorry, I see no merit in your proposal [to add "." and ".." back into
the output of os.listdir()].  You are overlooking the fact that the os
module in Python is intended to be a *portable* interface to operating
system functionality.  The presence of "." and ".." in a directory
listing is not supported on all platforms, e.g. not on Macintosh.

Also, my experience with using os.listdir() way back, when it *did*
return "." and "..", was that *every* program using os.listdir() had
to be careful to filter out "." and "..".  It simply wasn't useful to
include these.
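The filtering chore described above looked roughly like this (a sketch, not code from any particular program):

```python
import os

def visible_entries(path):
    # Under the old behavior every caller repeated this dance:
    # strip "." and ".." back out of the listing before using it.
    # Today's os.listdir() never returns the two entries, so the
    # filter is a harmless no-op.
    return [name for name in os.listdir(path)
            if name not in ('.', '..')]
```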

--Guido van Rossum (home page: http://www.python.org/~guido/)


From paulp@ActiveState.com  Sat Mar 17 02:56:27 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Fri, 16 Mar 2001 18:56:27 -0800
Subject: [Python-Dev] Sourceforge FAQ
Message-ID: <3AB2D25B.FA724414@prescod.net>

Who maintains this document?

http://python.sourceforge.net/sf-faq.html#p1

I have some suggestions.

 1. Put an email address for comments like this in it.
 2. In the section on generating diff's, put in the right options for a
context diff
 3. My SF FAQ isn't there: how do I generate a diff that has a new file
as part of it?

 Paul Prescod


From nas@arctrix.com  Sat Mar 17 02:59:22 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 18:59:22 -0800
Subject: [Python-Dev] Simple generator implementation
Message-ID: <20010316185922.A11046@glacier.fnational.com>

Before I jump into the black hole of coroutines and
continuations, here's a patch to remember me by:

    http://arctrix.com/nas/python/generator1.diff

Bye bye.

  Neil


From tim.one@home.com  Sat Mar 17 05:40:49 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 17 Mar 2001 00:40:49 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <3AB2D25B.FA724414@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>

[Paul Prescod]
> Who maintains this document?
>
> http://python.sourceforge.net/sf-faq.html#p1

Who maintains ceval.c?  Same deal:  anyone with time, commit access, and
something they want to change.

> I have some suggestions.
>
>  1. Put an email address for comments like this in it.

If you're volunteering, happy to put in *your* email address <wink>.

>  2. In the section on generating diff's, put in the right options for a
> context diff

The CVS source is

    python/nondist/sf-html/sf-faq.html

You also need to upload (scp) it to

    shell.sourceforge.net:/home/groups/python/htdocs/

after you've committed your changes.

>  3. My SF FAQ isn't there: how do I generate a diff that has a new file
> as part of it?

"diff -c" <wink -- but I couldn't make much sense of this question>.



From tim.one@home.com  Sat Mar 17 09:29:24 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 17 Mar 2001 04:29:24 -0500
Subject: [Python-Dev] Re: WYSIWYG decimal fractions)
In-Reply-To: <3AB256CD.AE35DDEC@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEEHJGAA.tim.one@home.com>

[M.-A. Lemburg, on FixedPoint.py]
> ...
> Very impressive ! The code really shows just how difficult it is
> to get this done right (w/r to some definition of that term ;).

Yes and no.  Here's the "no" part:  I can do code like this in my sleep, due
to decades of experience.  So code like that isn't difficult at all for the
right person (yes, it *is* difficult if you don't already have the background
for it!  it's learnable, though <wink>).

Here's the "yes" part:  I have no experience with database or commercial
non-scientific applications, while people who do seem to have no clue about
how to *specify* what they need.  When I was writing FixedPoint.py, I asked
and asked what kind of rounding rules people needed, and what kind of
precision-propagation rules.  I got a grand total of 0 *useful* replies.  In
that sense it seems a lot like getting Python threads to work under HP-UX:
lots of people can complain, but no two HP-UX users agree on what's needed to
fix it.

In the end (for me), it *appeared* that there simply weren't any explicable
rules:  that among users of 10 different commercial apps, there were 20
different undocumented and proprietary legacy schemes for doing decimal fixed
and floats.  I'm certain I could implement any of them via trivial variations
of the FixedPoint.py code, but I couldn't get a handle on what exactly they
were.

> BTW, is the implementation ANSI/IEEE standards conform ?

Sure, the source code strictly conforms to the ANSI character set <wink>.

Which standards specifically do you have in mind?  The decimal portions of
the COBOL and REXX standards are concerned with how decimal arithmetic
interacts with language-specific features, while the 854 standard is
concerned with decimal *floating* point (which the astute reader may have
guessed FixedPoint.py does not address).  So it doesn't conform to any of
those.  Rounding, when needed, is done in conformance with the *default*
"when rounding is needed, round via nearest-or-even as if the intermediate
result were known to infinite precision" 854 rules.  But I doubt that many
commercial implementations of decimal arithmetic use that rule.
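The default 854-style rule Tim describes (round to nearest, ties to even) can be sketched with the `decimal` module that reached the stdlib later:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Round to two places under "nearest, ties go to the even digit".
q = Decimal('0.01')
print(Decimal('2.675').quantize(q, rounding=ROUND_HALF_EVEN))  # 2.68
print(Decimal('2.665').quantize(q, rounding=ROUND_HALF_EVEN))  # 2.66
```

Both inputs sit exactly halfway between two candidates, so the tie-breaking rule alone decides the result.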

My much fancier Rational package (which I never got around to making
available) supports 9 rounding modes directly, and can be user-extended to
any number of others.  I doubt any of the builtin ones are in much use either
(for example, the builtin "round away from 0" and "round to nearest, or
towards minus infinity in case of tie" aren't even useful to me <wink>).

Today I like Cowlishaw's "Standard Decimal Arithmetic Specification" at

    http://www2.hursley.ibm.com/decimal/decspec.html

but have no idea how close that is to commercial practice (OTOH, it's
compatible w/ REXX, and lots of database-heads love REXX).

> ...
> Note that I will have to interface to database using the string
> representation, so I might get away with adding scale and precision
> parameters to a (new) asString() method.

As some of the module comments hint, FixedPoint.py started life with more
string gimmicks.  I ripped them out, though, for the same reason we *should*
drop thread support on HP-UX <0.6 wink>:  no two emails I got agreed on what
was needed, and the requests were mutually incompatible.  So I left a clean
base class for people to subclass as desired.

On 23 Dec 1999, Jim Fulton again raised "Fixed-decimal types" on Python-Dev.
I was on vacation & out of touch at the time.  Guido has surely forgotten
that he replied

    I like the idea of using the dd.ddL notation for this.

and will deny it if he reads this <wink>.

There's a long discussion after that -- look it up!  I see that I got around
to replying on 30 Dec 1999-- a repetition of this thread, really! --and
posted (Python) kernels for more flexible precision-control and rounding
policies than FixedPoint.py provided.

As is customary in the Python world, the first post that presented actual
code killed the discussion <wink/sigh> -- 'twas never mentioned again.

>> FixedPoint.py is better suited to computation than I/O, though,
>> since it uses Python longs internally, and conversion between
>> BCD-like formats and Python longs is expensive.

> See above: if string representations can be computed fast,

They cannot.  That was the point.  String representations *are* "BCD-like" to
me, in that they separate out each decimal digit.  To suck the individual
decimal digits out of a Python long requires a division by 10 for each digit.
Since people in COBOL routinely work with 32-digit decimal numbers, that's 32
*multi-precision* divisions by 10.  S-l-o-w.  You can play tricks like
dividing by 1000 instead, then use table lookup to get three digits at a
crack, but the overall process remains quadratic-time in the number of
digits.

Converting from a string of decimal digits to a Python long is also quadratic
time, so using longs as an internal representation is expensive in both
directions.

It is by far the cheapest way to do *computations*, though.  So I meant what
I said in all respects.
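The per-digit division Tim describes can be sketched like this (illustrative helper, not code from FixedPoint.py):

```python
def decimal_digits(n):
    # Pull decimal digits off a nonnegative int, least significant
    # first.  Each loop step is a full multi-precision division by
    # 10, so converting a d-digit number costs O(d**2) overall.
    if n == 0:
        return [0]
    digits = []
    while n:
        n, d = divmod(n, 10)
        digits.append(d)
    return digits

print(decimal_digits(2001))   # [1, 0, 0, 2]
```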

> ...
> Hmm, ideal would be an Open Source C lib which could be used as
> backend for the implementation... haven't found such a beast yet
> and the IBM BigDecimal Java class doesn't really look attractive as
> basis for a C++ reimplementation.

It's easy to find GPL'ed code for decimal arithmetic (for example, pick up
the Regina REXX implementation linked to from the Cowlishaw page).  For that
matter, you could just clone Python's longint code and fiddle the base to a
power of 10 (mutatis mutandis), and stick an exponent ("scale factor") on it.
This is harder than it sounds, but quite doable.

then-again-if-god-had-wanted-us-to-use-base-10-he-wouldn't-have-
    given-us-2-fingers-ly y'rs  - tim



From aahz@pobox.com (Aahz Maruch)  Sat Mar 17 16:35:17 2001
From: aahz@pobox.com (Aahz Maruch) (aahz@pobox.com (Aahz Maruch))
Date: Sat, 17 Mar 2001 08:35:17 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010315233737.B29286@xs4all.nl> from "Thomas Wouters" at Mar 15, 2001 11:37:37 PM
Message-ID: <200103171635.LAA12321@panix2.panix.com>

>>     1. There must be zero syntax changes.  All .pyc and .pyo files
>>        must work (no regeneration needed) with all patch releases
>>        forked off from a feature release.
> 
> Hmm... Would making 'continue' work inside 'try' count as a bugfix or as a
> feature ? It's technically not a syntax change, but practically it is.
> (Invalid syntax suddenly becomes valid.) 

That's a good question.  The modifying sentence is the critical part:
would there be any change to the bytecodes generated?  Even if not, I'd
be inclined to reject it.

>>   Bug Fix Releases
>> 
>>     Bug fix releases are a subset of all patch releases; it is
>>     prohibited to add any features to the core in a bug fix release.
>>     A patch release that is not a bug fix release may contain minor
>>     feature enhancements, subject to the Prohibitions section.
> 
> I'm not for this 'bugfix release', 'patch release' difference. The
> numbering/naming convention is too confusing, not clear enough, and I don't
> see the added benefit of adding limited features. If people want features,
> they should go and get a feature release. The most important bit in patch
> ('bugfix') releases is not to add more bugs, and rewriting parts of code to
> fix a bug is something that is quite likely to insert more bugs. Sure, as
> the patch coder, you are probably certain there are no bugs -- but so was
> whoever added the bug in the first place :)

As I said earlier, the primary motivation for going this route was the
ambiguous issue of case-sensitive imports.  (Similar issues are likely
to crop up.)

>>     The Patch Czar decides when there are a sufficient number of
>>     patches to warrant a release.  The release gets packaged up,
>>     including a Windows installer, and made public as a beta release.
>>     If any new bugs are found, they must be fixed and a new beta
>>     release publicized.  Once a beta cycle completes with no new bugs
>>     found, the package is sent to PythonLabs for certification and
>>     publication on python.org.
> 
>>     Each beta cycle must last a minimum of one month.
> 
> This process probably needs a firm smack with reality, but that would have
> to wait until it meets some, first :) Deciding when to do a bugfix release
> is very tricky: some bugs warrant a quick release, but waiting to assemble
> more is generally a good idea. The whole beta cycle and windows
> installer/RPM/etc process is also a bottleneck. Will Tim do the Windows
> Installer (or whoever does it for the regular releases) ? If he's building
> the installer anyway, why can't he 'bless' the release right away ?

Remember that all bugfixes are available as patches off of SourceForge.
Anyone with a truly critical need is free to download the patch and
recompile.  Overall, I see patch releases as coinciding with feature
releases so that people can concentrate on doing the same kind of work
at the same time.

> I'm also not sure if a beta cycle in a bugfix release is really necessary,
> especially a month long one. Given that we have a feature release planned
> each 6 months, and a feature release has generally 2 alphas and 2 betas,
> plus sometimes a release candidate, plus the release itself, and a bugfix
> release would have one or two betas too, and say that we do two betas in
> those six months, that would make 10+ 'releases' of various form in those 6
> months. Ain't no-one[*] going to check them out for a decent spin, they'll
> just wait for the final version.

That's why I'm making the beta cycle artificially long (I'd even vote
for a two-month minimum).  It slows the release pace and -- given the
usually high quality of Python betas -- it encourages people to try them
out.  I believe that we *do* need patch betas for system testing.

>>     Should the first patch release following any feature release be
>>     required to be a bug fix release?  (Aahz proposes "yes".)
>>     Is it allowed to do multiple forks (e.g. is it permitted to have
>>     both 2.0.2 and 2.0.2p)?  (Aahz proposes "no".)
>>     Does it make sense for a bug fix release to follow a patch
>>     release?  (E.g., 2.0.1, 2.0.2p, 2.0.3.)
> 
> More reasons not to have separate featurebugfixreleasethingies and
> bugfix-releases :)

Fair enough.

>>     What is the equivalent of python-dev for people who are
>>     responsible for maintaining Python?  (Aahz proposes either
>>     python-patch or python-maint, hosted at either python.org or
>>     xs4all.net.)
> 
> It would probably never be hosted at .xs4all.net. We use the .net address
> for network related stuff, and as a nice Personality Enhancer (read: IRC
> dick extender) for employees. We'd be happy to host stuff, but I would
> actually prefer to have it under a python.org or some other python-related
> domainname. That forestalls python questions going to admin@xs4all.net :) A
> small logo somewhere on the main page would be nice, but stuff like that
> should be discussed if it's ever an option, not just because you like the
> name 'XS4ALL' :-)

Okay, I didn't mean to imply that it would literally be @xs4all.net.

>>     Does SourceForge make it possible to maintain both separate and
>>     combined bug lists for multiple forks?  If not, how do we mark
>>     bugs fixed in different forks?  (Simplest is to simply generate a
>>     new bug for each fork that it gets fixed in, referring back to the
>>     main bug number for details.)
> 
> We could make it a separate SF project, just for the sake of keeping
> bugreports/fixes in the maintenance branch and the head branch apart. The
> main Python project already has an unwieldy number of open bugreports and
> patches.

That was one of my thoughts, but I'm not entitled to an opinion (I don't
have an informed opinion ;-).

> I'm also for starting the maintenance branch right after the real release,
> and start adding bugfixes to it right away, as soon as they show up. Keeping
> up to date on bugfixes to the head branch is then as 'simple' as watching
> python-checkins. (Up until the fact a whole subsystem gets rewritten, that
> is :) People should still be able to submit bugfixes for the maintenance
> branch specifically.

That is *precisely* why my original proposal suggested that only the N-1
release get patch attention, to conserve effort.  It is also why I
suggested that patch releases get hooked to feature releases.

> And I'm still willing to be the patch monkey, though I don't think I'm the
> only or the best candidate. I'll happily contribute regardless of who gets
> the blame :)

If you're willing to do the work, I'd love it if you were the official
Patch Czar.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J


From ping@lfw.org  Sat Mar 17 22:00:22 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Sat, 17 Mar 2001 14:00:22 -0800 (PST)
Subject: [Python-Dev] Scoping (corner cases)
Message-ID: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org>

Hey there.

What's going on here?

    Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
    [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> x = 1
    >>> class Foo:
    ...     print x
    ... 
    1
    >>> class Foo:  
    ...     print x
    ...     x = 1
    ... 
    1
    >>> class Foo:
    ...     print x
    ...     x = 2
    ...     print x
    ... 
    1
    2
    >>> x
    1

Can we come up with a consistent story on class scopes for 2.1?



-- ?!ng



From guido@digicool.com  Sat Mar 17 22:19:52 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sat, 17 Mar 2001 17:19:52 -0500
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: Your message of "Sat, 17 Mar 2001 14:00:22 PST."
 <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org>
References: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org>
Message-ID: <200103172219.RAA16377@cj20424-a.reston1.va.home.com>

> What's going on here?
> 
>     Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> x = 1
>     >>> class Foo:
>     ...     print x
>     ... 
>     1
>     >>> class Foo:  
>     ...     print x
>     ...     x = 1
>     ... 
>     1
>     >>> class Foo:
>     ...     print x
>     ...     x = 2
>     ...     print x
>     ... 
>     1
>     2
>     >>> x
>     1
> 
> Can we come up with a consistent story on class scopes for 2.1?

They are consistent with all past versions of Python.

Class scopes don't work like function scopes -- they use LOAD_NAME and
STORE_NAME.
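A short sketch of what that means in practice (names here are illustrative): a read in a class body goes through plain LOAD_NAME, so it finds the global until the class-local binding is made, instead of raising UnboundLocalError the way a function body would.

```python
# Class bodies use LOAD_NAME/STORE_NAME, so a read falls back to the
# global x until the class-local x is bound.
x = 1

class Foo:
    before = x   # LOAD_NAME finds the global x
    x = 2        # STORE_NAME creates the class-local x
    after = x    # LOAD_NAME now finds the class-local x

assert Foo.before == 1
assert Foo.after == 2
assert x == 1    # the global is untouched
```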

--Guido van Rossum (home page: http://www.python.org/~guido/)


From jeremy@alum.mit.edu  Sat Mar 17 02:16:23 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Fri, 16 Mar 2001 21:16:23 -0500 (EST)
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: <200103172219.RAA16377@cj20424-a.reston1.va.home.com>
References: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org>
 <200103172219.RAA16377@cj20424-a.reston1.va.home.com>
Message-ID: <15026.51447.862936.753570@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:

  >> Can we come up with a consistent story on class scopes for 2.1?

  GvR> They are consistent with all past versions of Python.

Phew!

  GvR> Class scopes don't work like function scopes -- they use
  GvR> LOAD_NAME and STORE_NAME.

Class scopes are also different because a block's free variables are
not resolved in enclosing class scopes.  We'll need to make sure the
doc says that class scopes and function scopes are different.
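A sketch of that difference: a free variable in a method skips the enclosing class scope entirely and resolves in the global scope instead.

```python
# Free variables in a nested function are not resolved in enclosing
# class scopes; the lookup skips straight to the module globals.
x = "global"

class C:
    x = "class"
    def get(self):
        return x   # free variable: the class scope is skipped

assert C.x == "class"           # the class attribute exists...
assert C().get() == "global"    # ...but the method sees the global
```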

Jeremy



From tim.one@home.com  Sat Mar 17 22:31:08 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 17 Mar 2001 17:31:08 -0500
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFLJGAA.tim.one@home.com>

[Ka-Ping Yee]
> What's going on here?
>
>     Python 2.1b1 (#15, Mar 16 2001, 04:31:43)
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> x = 1
>     >>> class Foo:
>     ...     print x
>     ...
>     1
>     >>> class Foo:
>     ...     print x

IMO, this one should have yielded an UnboundLocalError at runtime.  "A class
definition is a code block", and has a local namespace that's supposed to
follow the namespace rules; since x is bound to on the next line, x should be
a local name within the class body.

>     ...     x = 1
>     ...
>     1
>     >>> class Foo:
>     ...     print x

Ditto.

>     ...     x = 2
>     ...     print x
>     ...
>     1
>     2
>     >>> x
>     1
>
> Can we come up with a consistent story on class scopes for 2.1?

The story is consistent but the implementation is flawed <wink>.  Please open
a bug report; I wouldn't consider it high priority, though, as this is
unusual stuff to do in a class definition.
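For comparison, here is a sketch of the function-scope behavior Tim is appealing to: a read before a local binding is an UnboundLocalError at runtime.

```python
# In a function body, an assignment anywhere makes the name local for
# the whole body, so reading it before the assignment raises
# UnboundLocalError -- the behavior Tim argues class bodies should share.
x = 1

def f():
    y = x   # x is local to f because of the assignment below
    x = 2
    return y

caught = False
try:
    f()
except UnboundLocalError:
    caught = True
assert caught
```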



From tim.one@home.com  Sat Mar 17 22:33:07 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 17 Mar 2001 17:33:07 -0500
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: <15026.51447.862936.753570@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEFMJGAA.tim.one@home.com>

[Guido]
> Class scopes don't work like function scopes -- they use
> LOAD_NAME and STORE_NAME.

[Jeremy]
> Class scopes are also different because a block's free variables are
> not resolved in enclosing class scopes.  We'll need to make sure the
> doc says that class scopes and function scopes are different.

Yup.  Since I'll never want to do stuff like this, I don't really care a heck
of a lot what it does; but it should be documented!

What does Jython do with these?



From thomas@xs4all.net  Sat Mar 17 23:01:09 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 00:01:09 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103171635.LAA12321@panix2.panix.com>; from aahz@panix.com on Sat, Mar 17, 2001 at 08:35:17AM -0800
References: <20010315233737.B29286@xs4all.nl> <200103171635.LAA12321@panix2.panix.com>
Message-ID: <20010318000109.M27808@xs4all.nl>

On Sat, Mar 17, 2001 at 08:35:17AM -0800, aahz@panix.com wrote:

> Remember that all bugfixes are available as patches off of SourceForge.

I'm sorry, Aahz, but that is just plain not true. It's not a little bit not
true, it's very not true. A lot of the patches applied are either never
submitted to SF (because it's the 'obvious fix' by one of the commiters) or
are modified to some extent from the SF patch proposed. (Often
formatting/code style, fairly frequently symbol renaming, and not too
infrequently changes in the logic for various reasons.)

> > ... that would make 10+ 'releases' of various form in those 6 months.
> > Ain't no-one[*] going to check them out for a decent spin, they'll just
> > wait for the final version.

> That's why I'm making the beta cycle artificially long (I'd even vote
> for a two-month minimum).  It slows the release pace and -- given the
> usually high quality of Python betas -- it encourages people to try them
> out.  I believe that we *do* need patch betas for system testing.

But having a patch release once every 6 months negates the whole purpose of
patch releases :) If you are in need of a bugfix, you don't want to wait
three months before a bugfix release beta with your specific bug fixed is
going to be released, and you don't want to wait two months more for the
release to become final. (Note: we're talking people who don't want to use
the next feature release beta or current CVS version, so they aren't likely
to try a bugfix release beta either.) Bugfix releases should come often-ish,
compared to feature releases. But maybe we can get the BDFL to slow the pace
of feature releases instead ? Is the 6-month speedway really appropriate if
we have a separate bugfix release track ?

> > I'm also for starting the maintenance branch right after the real release,
> > and start adding bugfixes to it right away, as soon as they show up. Keeping
> > up to date on bugfixes to the head branch is then as 'simple' as watching
> > python-checkins. (Up until the point a whole subsystem gets rewritten, that
> > is :) People should still be able to submit bugfixes for the maintenance
> > branch specifically.

> That is *precisely* why my original proposal suggested that only the N-1
> release get patch attention, to conserve effort.  It is also why I
> suggested that patch releases get hooked to feature releases.

There is no technical reason to do just N-1. You can branch off as often as
you want (in fact, branches never disappear, so if we were building 3.5 ten
years from now (and we would still be using CVS <wink GregS>) we could apply
a specific patch to the 2.0 maintenance branch and release 2.0.128, if need
be.)

Keeping too many maintenance branches active does bring the administrative
nightmare with it, of course. We can start with just N-1 and see where it
goes from there. If significant numbers of people are still using 2.0.5 when
2.2 comes out, we might have to reconsider.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Sat Mar 17 23:26:45 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 00:26:45 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>; from tim.one@home.com on Sat, Mar 17, 2001 at 12:40:49AM -0500
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
Message-ID: <20010318002645.H29286@xs4all.nl>

On Sat, Mar 17, 2001 at 12:40:49AM -0500, Tim Peters wrote:

> >  3. My SF FAQ isn't there: how do I generate a diff that has a new file
> > as part of it?

> "diff -c" <wink -- but I couldn't make much sense of this question>.

What Paul means is that he's added a new file to his tree, and wants to send
in a patch that includes that file. Unfortunately, CVS can't do that :P You
have two choices:

- 'cvs add' the file, but don't commit. This is kinda lame since it requires
 commit access, and it creates the administrivia for the file already. I
 *think* that if you do this, only you can actually add the file (after the
 patch is accepted ;) but I'm not sure. After the cvs add, a cvs diff -c will
 show the file (as all +'es, obviously) even though it will complain to
 stderr about its ignorance about that specific file.

- Don't use cvs diff. Use real diff instead. Something like this:

  mv your tree aside (you can just mv your 'src' dir to 'src.mypatch' or such),
  cvs update -d,
  make distclean in your old tree,
  diff -crN --exclude=CVS src src.mypatch > mypatch.diff

 Scan your diff for bogus files, delete the sections by hand or if there are
 too many of them, add more --exclude options to your diff. I usually use
 '--exclude=".#*"' as well, and I forget what else.  By the way, for those
 who don't know it yet, an easy way to scan the patch is using 'diffstat'.

Note that to *apply* a patch like that (one with a new file), you need a
reasonably up-to-date GNU 'patch'.

I haven't added all this to the SF FAQ because, uhm, well, I consider them
lame hacks. I've long suspected there was a better way to do this, but I
haven't found it or even heard rumours about it yet. We should probably add
it to the FAQ anyway (just the 2nd option, though.)

Of course, there is a third way: write your own diff >;> It's not that hard,
really :) 

diff -crN ....
*** <name of file>      Thu Jan  1 01:00:00 1970
--- <name of file>      <timestamp of file>
***************
*** 0 ****
--- 1,<number of lines in file> ----
<file, each line prefixed by '+ '>

You can just insert this chunk (with an Index: line and some fake RCS cruft,
if you want -- patch doesn't use it anyway, IIRC) somewhere in your patch
file.
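Not part of Thomas's recipe, but for what it's worth, a plain diff against /dev/null generates the same addition-only chunk mechanically, no CVS involvement needed (the file names here are invented for the demo):

```shell
# Diff a brand-new file against /dev/null with -c to get the same
# addition-only context chunk, instead of writing it by hand.
demo=$(mktemp -d)
printf 'line one\nline two\n' > "$demo/newmodule.py"
# diff exits 1 when the files differ, so guard it for 'set -e' shells
diff -cN /dev/null "$demo/newmodule.py" > "$demo/mypatch.diff" || true
# every line of the new file appears prefixed with '+ '
grep -c '^+ ' "$demo/mypatch.diff"
```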

A couple of weeks back, while on a 10-hour nighttime spree to fix all our
SSH clients and daemons to openssh 2.5 where possible and a handpatched ssh1
where necessary, I found myself unconsciously writing diffs instead of
editing source and re-diffing the files, because I apparently thought it was
faster (it was, too.) Scarily enough, I got all the line numbers and such
correct, and patch didn't whine about them at all ;)

Sign-o-the-nerdy-times-I-guess-ly y'rs ;)
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim.one@home.com  Sat Mar 17 23:49:22 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 17 Mar 2001 18:49:22 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <20010318002645.H29286@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFNJGAA.tim.one@home.com>

[Paul]
>>>  3. My SF FAQ isn't there: how do I generate a diff that has a new file
>>>     as part of it?

[Tim]
>> "diff -c" <wink -- but I couldn't make much sense of this question>.

[Thomas]
> What Paul means is that he's added a new file to his tree, and
> wants to send in a patch that includes that file.

Ya, I picked that up after Martin explained it.  Best I could make out was
that Paul had written his own SF FAQ document and wanted to know how to
generate a diff that incorporated it as "a new file" into the existing SF
FAQ.  But then I've been severely sleep-deprived most of the last week
<0.zzzz wink>.

> ...
> - Don't use cvs diff. Use real diff instead. Something like this:
>
>   mv your tree asside, (can just mv your 'src' dir to
>                         'src.mypatch' or such)
>   cvs update -d,
>   make distclean in your old tree,
>   diff -crN --exclude=CVS src src.mypatch > mypatch.diff
>
> Scan your diff for bogus files, delete the sections by hand or if
> there are too many of them, add more --exclude options to your diff. I
> usually use '--exclude=".#*"' as well, and I forget what else.  By the
> way, for those who don't know it yet, an easy way to scan the patch is
> using 'diffstat'.
>
> Note that to *apply* a patch like that (one with a new file), you need a
> reasonably up-to-date GNU 'patch'.
> ...

I'm always amused that Unix users never allow the limitations of their tools
to convince them to do something obvious instead.

on-windows-you-just-tell-tim-to-change-the-installer<wink>-ly y'rs  - tim



From thomas@xs4all.net  Sat Mar 17 23:58:40 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 00:58:40 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEFNJGAA.tim.one@home.com>; from tim.one@home.com on Sat, Mar 17, 2001 at 06:49:22PM -0500
References: <20010318002645.H29286@xs4all.nl> <LNBBLJKPBEHFEDALKOLCIEFNJGAA.tim.one@home.com>
Message-ID: <20010318005840.K29286@xs4all.nl>

On Sat, Mar 17, 2001 at 06:49:22PM -0500, Tim Peters wrote:

> I'm always amused that Unix users never allow the limitations of their tools
> to convince them to do something obvious instead.

What would be the obvious thing ? Just check it in ? :-)
Note that CVS's dinkytoy attitude did prompt several people to do the
obvious thing: they started to rewrite it from scratch. Greg Stein jumped in
with those people to help them out on the tough infrastructure decisions,
which is why one of my *other* posts that mentioned CVS did a <wink GregS>
;)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From tim.one@home.com  Sun Mar 18 00:17:06 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 17 Mar 2001 19:17:06 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <20010318005840.K29286@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFOJGAA.tim.one@home.com>

[Thomas Wouters]
> What would be the obvious thing ? Just check it in ? :-)

No:  as my signoff line implied, switch to Windows and tell Tim to deal with
it.  Works for everyone except me <wink>!  I was just tweaking you.  For a
patch on SF, it should be enough to just attach the new files and leave a
comment saying where they belong.

> Note that CVS's dinkytoy attitude did prompt several people to do the
> obvious thing: they started to rewrite it from scratch. Greg Stein
> jumped in with those people to help them out on the tough infrastructure
> decisions, which is why one of my *other* posts that mentioned CVS did a
> <wink GregS>
> ;)

Yup, *that* I picked up.

BTW, I'm always amused that Unix users never allow the lateness of their
rewrite-from-scratch boondoggles to convince them to do something obvious
instead.

wondering-how-many-times-someone-will-bite-ly y'rs  - tim



From pedroni@inf.ethz.ch  Sun Mar 18 00:27:48 2001
From: pedroni@inf.ethz.ch (Samuele Pedroni)
Date: Sun, 18 Mar 2001 01:27:48 +0100
Subject: [Python-Dev] Scoping (corner cases)
References: <LNBBLJKPBEHFEDALKOLCAEFMJGAA.tim.one@home.com>
Message-ID: <3AB40104.8020109@inf.ethz.ch>

Hi.

Tim Peters wrote:

> [Guido]
> 
>> Class scopes don't work like function scopes -- they use
>> LOAD_NAME and STORE_NAME.
> 
> 
> [Jeremy]
> 
>> Class scopes are also different because a block's free variables are
>> not resolved in enclosing class scopes.  We'll need to make sure the
>> doc says that class scopes and function scopes are different.
> 
> 
> Yup.  Since I'll never want to do stuff like this, I don't really care a heck
> of a lot what it does; but it should be documented!
> 
> What does Jython do with these?

The Jython codebase (both before and after my nested-scopes changes) does
exactly the same as CPython; in fact something
equivalent to LOAD_NAME and STORE_NAME is used in class scopes.

regards



From pedroni@inf.ethz.ch  Sun Mar 18 01:17:47 2001
From: pedroni@inf.ethz.ch (Samuele Pedroni)
Date: Sun, 18 Mar 2001 02:17:47 +0100
Subject: [Python-Dev] Icon-style generators vs. jython
References: <LNBBLJKPBEHFEDALKOLCAEFLJGAA.tim.one@home.com>
Message-ID: <3AB40CBB.2050308@inf.ethz.ch>

>   

This is very preliminary; I've had no time to read the details, try things
out, or look at Neil's implementation.

As far as I understand, Icon generators are functions with normal entry and
exit points plus multiple suspension points: at a suspension point an
eventual implementation should save the current frame state somewhere
inside the function object, together with the information about where the
function should restart, and then return a value (or nothing) as usual.

In Jython we have frames, and functions are encapsulated in objects, so the
whole thing should be doable (with some effort); I expect we can handle
the multiple entry points with a JVM switch bytecode. Entry code or
function dispatch code should handle restarting (we already have
code that manages frame creation and function dispatch on every Python
call).

There could be a problem with jythonc (the Jython-to-Java compiler)
because it produces Java source code rather than bytecode directly:
at the source level, AFAIK, Java cannot interleave switches with other
control structures, so it is unclear how to handle multiple entry points
(no goto ;)). We would have to rewrite it to produce bytecode directly.

What is the expected behaviour with respect to threads? Should generators
be reentrant (meaning that frame and restart info would be saved on a
per-thread basis), or are they somehow global active objects, so that if
thread 1 calls a generator that suspends, thread 2 will reenter it after
the suspension point?

Freezing more than one frame is not directly possible in Jython: frames
are pushed and popped on the Java stack, and function calls pass through
the Java calling mechanism. (I imagine you would need a separate thread to
do that.)

regards.



From tim.one@home.com  Sun Mar 18 01:36:40 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 17 Mar 2001 20:36:40 -0500
Subject: FW: FW: [Python-Dev] Simple generator implementation
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGCJGAA.tim.one@home.com>

FYI, I pointed a correspondent to Neil's new generator patch (among other
things), and got this back.  Not being a Web Guy at heart, I don't have a
clue about XSLT (just enough to know that 4-letter acronyms are a web
abomination <wink>).

Note:  in earlier correspondence, the generator idea didn't seem to "click"
until I called them "resumable functions" (as I often did in the past, but
fell out of the habit).  People new to the concept often pick that up
quicker, or even, as in this case, remember that they once rolled such a
thing by hand out of prior necessity.

Anyway, possibly food for thought if XSLT means something to you ...


-----Original Message-----
From: XXX
Sent: Saturday, March 17, 2001 8:09 PM
To: Tim Peters
Subject: Re: FW: [Python-Dev] Simple generator implementation


On Sat, 17 Mar 2001, Tim Peters wrote:
> It's been done at least three times by now, most recently yesterday(!):

Thanks for the pointer.  I've started to read some
of the material you pointed me to... generators
are indeed very interesting.  They are what is
needed for an efficient implementation of XSLT.
(I was part of an XSLT implementation team that had to
dream up essentially the same solution). This is
all very cool.  Glad to see that I'm just re-inventing
the wheel.  Let's get generators in Python!

;) XXX



From paulp@ActiveState.com  Sun Mar 18 01:50:39 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Sat, 17 Mar 2001 17:50:39 -0800
Subject: FW: FW: [Python-Dev] Simple generator implementation
References: <LNBBLJKPBEHFEDALKOLCGEGCJGAA.tim.one@home.com>
Message-ID: <3AB4146E.62AE3299@ActiveState.com>

I would call what you need for an efficient XSLT implementation "lazy
lists." They are never infinite but you would rather not pre-compute
them in advance. Often you use only the first item. Iterators probably
would be a good implementation technique.
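A sketch of that "lazy list" idea in generator style (the transform function here is a made-up stand-in for an XSLT-like per-node computation, and the syntax is the later yield-based one, not anything available at the time):

```python
# A lazy sequence computes each item only on demand, so taking just
# the first result does a single unit of work.
calls = []

def transform(nodes):
    # stand-in for an expensive per-node transformation
    for n in nodes:
        calls.append(n)
        yield n.upper()

results = transform(["a", "b", "c"])
first = next(results)
assert first == "A"
assert calls == ["a"]   # only the first item was ever computed
```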
-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook


From nas@arctrix.com  Sun Mar 18 02:17:41 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Sat, 17 Mar 2001 18:17:41 -0800
Subject: [Python-Dev] Simple generators, round 2
Message-ID: <20010317181741.B12195@glacier.fnational.com>

I've got a different implementation.  There are no new keywords,
and it's simpler to wrap a high-level interface around the
low-level interface.

    http://arctrix.com/nas/python/generator2.diff

What the patch does:

    Split the big for loop and switch statement out of eval_code2
    into PyEval_EvalFrame.

    Add a new "why" flag for ceval, WHY_SUSPEND.  It is similar to
    WHY_RETURN except that the frame value stack and the block stack
    are not touched.  The frame is also marked resumable before
    returning (f_stackbottom != NULL).

    Add two new methods to frame objects, suspend and resume.
    suspend takes one argument which gets attached to the frame
    (f_suspendvalue).  This tells ceval to suspend as soon as control
    gets back to this frame.  resume, strangely enough, resumes a
    suspended frame.  Execution continues at the point it was
    suspended.  This is done by calling PyEval_EvalFrame on the frame
    object.

    Make frame_dealloc clean up the stack and decref f_suspendvalue
    if it exists.

There are probably still bugs and it slows down ceval too much
but otherwise things are looking good.  Here are some examples
(they're a little long but illustrative).  Low level
interface, similar to my last example:

    # print 0 to 999
    import sys

    def g():
        for n in range(1000):
            f = sys._getframe()
            f.suspend((n, f))
        return None, None

    n, frame = g()
    while frame:
        print n
        n, frame = frame.resume()

Let's build something easier to use:

    # Generator.py
    import sys

    class Generator:
        def __init__(self):
            self.frame = sys._getframe(1)
            self.frame.suspend(self)
            
        def suspend(self, value):
            self.frame.suspend(value)

        def end(self):
            raise IndexError

        def __getitem__(self, i):
            # fake indices suck, need iterators
            return self.frame.resume()

Now let's try Guido's pi example:

    # Prints out the first 100 digits of pi
    from Generator import Generator

    def pi():
        g = Generator()
        k, a, b, a1, b1 = 2L, 4L, 1L, 12L, 4L
        while 1:
            # Next approximation
            p, q, k = k*k, 2L*k+1L, k+1L
            a, b, a1, b1 = a1, b1, p*a+q*a1, p*b+q*b1
            # Print common digits
            d, d1 = a/b, a1/b1
            while d == d1:
                g.suspend(int(d))
                a, a1 = 10L*(a%b), 10L*(a1%b1)
                d, d1 = a/b, a1/b1

    def test():
        pi_digits = pi()
        for i in range(100):
            print pi_digits[i],

    if __name__ == "__main__":
        test()

Some tree traversals:

    from types import TupleType
    from Generator import Generator

    # (A - B) + C * (E/F)
    expr = ("+", 
             ("-", "A", "B"),
             ("*", "C",
                  ("/", "E", "F")))
               
    def postorder(node):
        g = Generator()
        if isinstance(node, TupleType):
            value, left, right = node
            for child in postorder(left):
                g.suspend(child)
            for child in postorder(right):
                g.suspend(child)
            g.suspend(value)
        else:
            g.suspend(node)
        g.end()

    print "postorder:",
    for node in postorder(expr):
        print node,
    print

This prints:

    postorder: A B - C E F / * +
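For comparison, here is a sketch of the same traversal in the yield-based style that PEP 255 later adopted (not valid syntax in any Python of this era):

```python
# The same postorder traversal, with suspension expressed as "yield"
# instead of frame.suspend()/resume().
def postorder(node):
    if isinstance(node, tuple):
        value, left, right = node
        for child in postorder(left):
            yield child
        for child in postorder(right):
            yield child
        yield value
    else:
        yield node

# (A - B) + C * (E/F)
expr = ("+", ("-", "A", "B"), ("*", "C", ("/", "E", "F")))
assert " ".join(postorder(expr)) == "A B - C E F / * +"
```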

Cheers,

  Neil


From aahz@pobox.com (Aahz Maruch)  Sun Mar 18 06:31:39 2001
From: aahz@pobox.com (Aahz Maruch) (aahz@pobox.com (Aahz Maruch))
Date: Sat, 17 Mar 2001 22:31:39 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010318000109.M27808@xs4all.nl> from "Thomas Wouters" at Mar 18, 2001 12:01:09 AM
Message-ID: <200103180631.BAA03321@panix3.panix.com>

>> Remember that all bugfixes are available as patches off of SourceForge.
> 
> I'm sorry, Aahz, but that is just plain not true. It's not a little bit not
> true, it's very not true. A lot of the patches applied are either never
> submitted to SF (because it's the 'obvious fix' by one of the commiters) or
> are modified to some extent from the SF patch proposed. (Often
> formatting/code style, fairly frequently symbol renaming, and not too
> infrequently changes in the logic for various reasons.)

I'm thinking one of us is confused.  CVS is hosted at SourceForge,
right?  People can download specific parts of Python from SF?  And we're
presuming there will be a specific fork that patches are checked in to?
So in what way is my statement not true?

>>> ... that would make 10+ 'releases' of various form in those 6 months.
>>> Ain't no-one[*] going to check them out for a decent spin, they'll just
>>> wait for the final version.
>> 
>> That's why I'm making the beta cycle artificially long (I'd even vote
>> for a two-month minimum).  It slows the release pace and -- given the
>> usually high quality of Python betas -- it encourages people to try them
>> out.  I believe that we *do* need patch betas for system testing.
> 
> But having a patch release once every 6 months negates the whole
> purpose of patch releases :) If you are in need of a bugfix, you
> don't want to wait three months before a bugfix release beta with
> your specific bug fixed is going to be released, and you don't want
> to wait two months more for the release to become final. (Note: we're
> talking people who don't want to use the next feature release beta or
> current CVS version, so they aren't likely to try a bugfix release
> beta either.) Bugfix releases should come often-ish, compared to
> feature releases. But maybe we can get the BDFL to slow the pace of
> feature releases instead ? Is the 6-month speedway really appropriate
> if we have a separate bugfix release track ?

Well, given that neither of us is arguing on the basis of actual
experience with Python patch releases, there's no way we can prove one
point of view as being better than the other.  Tell you what, though:
take the job of Patch Czar, and I'll follow your lead.  I'll just
reserve the right to say "I told you so".  ;-)

>>> I'm also for starting the maintenance branch right after the real release,
>>> and start adding bugfixes to it right away, as soon as they show up. Keeping
>>> up to date on bugfixes to the head branch is then as 'simple' as watching
>>> python-checkins. (Up until the point a whole subsystem gets rewritten, that
>>> is :) People should still be able to submit bugfixes for the maintenance
>>> branch specifically.
> 
>> That is *precisely* why my original proposal suggested that only the N-1
>> release get patch attention, to conserve effort.  It is also why I
>> suggested that patch releases get hooked to feature releases.
> 
> There is no technical reason to do just N-1. You can branch off as often as
> you want (in fact, branches never disappear, so if we were building 3.5 ten
> years from now (and we would still be using CVS <wink GregS>) we could apply
> a specific patch to the 2.0 maintenance branch and release 2.0.128, if need
> be.)

No technical reason, no.  It's just that N-1 is going to be similar
enough to N, particularly for any given bugfix, that it should be
"trivial" to keep the bugfixes in synch.  That's all.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J


From esr@snark.thyrsus.com  Sun Mar 18 06:46:28 2001
From: esr@snark.thyrsus.com (Eric S. Raymond)
Date: Sun, 18 Mar 2001 01:46:28 -0500
Subject: [Python-Dev] Followup on freezetools error
Message-ID: <200103180646.f2I6kSV16765@snark.thyrsus.com>

OK, so following Guido's advice I did a CVS update and reinstall and
then tried a freeze on the CML2 compiler.  Result:

Traceback (most recent call last):
  File "freezetools/freeze.py", line 460, in ?
    main()
  File "freezetools/freeze.py", line 321, in main
    mf.import_hook(mod)
  File "freezetools/modulefinder.py", line 106, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "freezetools/modulefinder.py", line 147, in find_head_package
    q = self.import_module(head, qname, parent)
  File "freezetools/modulefinder.py", line 232, in import_module
    m = self.load_module(fqname, fp, pathname, stuff)
  File "freezetools/modulefinder.py", line 260, in load_module
    self.scan_code(co, m)
  File "freezetools/modulefinder.py", line 280, in scan_code
    self.import_hook(name, m)
  [... the import_hook -> find_head_package -> import_module ->
   load_module -> scan_code cycle repeats several more times, including
   one recursive scan_code call at line 302, as the module graph is
   walked ...]
  File "freezetools/modulefinder.py", line 288, in scan_code
    assert lastname is not None
AssertionError
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Question with boldness even the existence of a God; because, if there
be one, he must more approve the homage of reason, than that of
blindfolded fear.... Do not be frightened from this inquiry from any
fear of its consequences. If it ends in the belief that there is no
God, you will find incitements to virtue in the comfort and
pleasantness you feel in its exercise...
	-- Thomas Jefferson, in a 1787 letter to his nephew


From esr@snark.thyrsus.com  Sun Mar 18 07:06:08 2001
From: esr@snark.thyrsus.com (Eric S. Raymond)
Date: Sun, 18 Mar 2001 02:06:08 -0500
Subject: [Python-Dev] Re: Followup on freezetools error
Message-ID: <200103180706.f2I768q17436@snark.thyrsus.com>

Cancel previous complaint.  Pilot error.  I think I'm going to end up
writing some documentation for this puppy...
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

You know why there's a Second Amendment?  In case the government fails to
follow the first one.
         -- Rush Limbaugh, in a moment of unaccustomed profundity 17 Aug 1993


From pedroni@inf.ethz.ch  Sun Mar 18 12:01:40 2001
From: pedroni@inf.ethz.ch (Samuele Pedroni)
Date: Sun, 18 Mar 2001 13:01:40 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com>
Message-ID: <001901c0afa3$322094e0$f979fea9@newmexico>

This kind of low-level implementation, where suspension points are known
only at runtime, cannot be implemented in Jython (at least not in a
reasonable, non-costly way). The Jython codebase is likely to allow only
generators whose suspension points are known at compilation time.

regards.

----- Original Message -----
From: Neil Schemenauer <nas@arctrix.com>
To: <python-dev@python.org>
Sent: Sunday, March 18, 2001 3:17 AM
Subject: [Python-Dev] Simple generators, round 2


> I've got a different implementation.  There are no new keywords
> and it's simpler to wrap a high-level interface around the
> low-level interface.
>
>     http://arctrix.com/nas/python/generator2.diff
>
> What the patch does:
>
>     Split the big for loop and switch statement out of eval_code2
>     into PyEval_EvalFrame.
>
>     Add a new "why" flag for ceval, WHY_SUSPEND.  It is similar to
>     WHY_RETURN except that the frame value stack and the block stack
>     are not touched.  The frame is also marked resumable before
>     returning (f_stackbottom != NULL).
>
>     Add two new methods to frame objects, suspend and resume.
>     suspend takes one argument which gets attached to the frame
>     (f_suspendvalue).  This tells ceval to suspend as soon as control
>     gets back to this frame.  resume, strangely enough, resumes a
>     suspended frame.  Execution continues at the point it was
>     suspended.  This is done by calling PyEval_EvalFrame on the frame
>     object.
>
>     Make frame_dealloc clean up the stack and decref f_suspendvalue
>     if it exists.
>
> There are probably still bugs and it slows down ceval too much
> but otherwise things are looking good.  Here are some examples
> (they're a little long but illustrative).  Low-level
> interface, similar to my last example:
>
>     # print 0 to 999
>     import sys
>
>     def g():
>         for n in range(1000):
>             f = sys._getframe()
>             f.suspend((n, f))
>         return None, None
>
>     n, frame = g()
>     while frame:
>         print n
>         n, frame = frame.resume()
>
> Let's build something easier to use:
>
>     # Generator.py
>     import sys
>
>     class Generator:
>         def __init__(self):
>             self.frame = sys._getframe(1)
>             self.frame.suspend(self)
>
>         def suspend(self, value):
>             self.frame.suspend(value)
>
>         def end(self):
>             raise IndexError
>
>         def __getitem__(self, i):
>             # fake indices suck, need iterators
>             return self.frame.resume()
>
> Now let's try Guido's pi example:
>
>     # Prints out the first 100 digits of pi
>     from Generator import Generator
>
>     def pi():
>         g = Generator()
>         k, a, b, a1, b1 = 2L, 4L, 1L, 12L, 4L
>         while 1:
>             # Next approximation
>             p, q, k = k*k, 2L*k+1L, k+1L
>             a, b, a1, b1 = a1, b1, p*a+q*a1, p*b+q*b1
>             # Print common digits
>             d, d1 = a/b, a1/b1
>             while d == d1:
>                 g.suspend(int(d))
>                 a, a1 = 10L*(a%b), 10L*(a1%b1)
>                 d, d1 = a/b, a1/b1
>
>     def test():
>         pi_digits = pi()
>         for i in range(100):
>             print pi_digits[i],
>
>     if __name__ == "__main__":
>         test()
>
> Some tree traversals:
>
>     from types import TupleType
>     from Generator import Generator
>
>     # (A - B) + C * (E/F)
>     expr = ("+",
>              ("-", "A", "B"),
>              ("*", "C",
>                   ("/", "E", "F")))
>
>     def postorder(node):
>         g = Generator()
>         if isinstance(node, TupleType):
>             value, left, right = node
>             for child in postorder(left):
>                 g.suspend(child)
>             for child in postorder(right):
>                 g.suspend(child)
>             g.suspend(value)
>         else:
>             g.suspend(node)
>         g.end()
>
>     print "postorder:",
>     for node in postorder(expr):
>         print node,
>     print
>
> This prints:
>
>     postorder: A B - C E F / * +
>
> Cheers,
>
>   Neil
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev@python.org
> http://mail.python.org/mailman/listinfo/python-dev
>
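
[Editorial note: for comparison, the postorder example above can also be
expressed with the yield-based generators discussed elsewhere in this
thread -- a sketch in later Python syntax, not something Neil's patch
provides. Each yield is a suspension point visible at compilation time,
the style Samuele notes is feasible for Jython, unlike the runtime
frame.suspend() calls.]

```python
def postorder(node):
    # Recursive generator: same traversal as the Generator-class example,
    # but each suspension point is an explicit 'yield' statement.
    if isinstance(node, tuple):
        value, left, right = node
        for child in postorder(left):
            yield child
        for child in postorder(right):
            yield child
        yield value
    else:
        yield node

# (A - B) + C * (E/F)
expr = ("+", ("-", "A", "B"), ("*", "C", ("/", "E", "F")))
print(" ".join(postorder(expr)))  # prints: A B - C E F / * +
```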




From fdrake@acm.org  Sun Mar 18 14:23:23 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Sun, 18 Mar 2001 09:23:23 -0500 (EST)
Subject: [Python-Dev] Re: Followup on freezetools error
In-Reply-To: <200103180706.f2I768q17436@snark.thyrsus.com>
References: <200103180706.f2I768q17436@snark.thyrsus.com>
Message-ID: <15028.50395.414064.239096@cj42289-a.reston1.va.home.com>

Eric S. Raymond writes:
 > Cancel previous complaint.  Pilot error.  I think I'm going to end up
 > writing some documentation for this puppy...

Eric,
  So how often would you like reminders?  ;-)
  I think a "howto" format document would be great; I'm sure we could
find a place for it in the standard documentation.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From guido@digicool.com  Sun Mar 18 15:01:50 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 10:01:50 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: Your message of "Sun, 18 Mar 2001 00:26:45 +0100."
 <20010318002645.H29286@xs4all.nl>
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
 <20010318002645.H29286@xs4all.nl>
Message-ID: <200103181501.KAA22545@cj20424-a.reston1.va.home.com>

> What Paul means is that he's added a new file to his tree, and wants to send
> in a patch that includes that file. Unfortunately, CVS can't do that :P You
> have two choices:
> 
> - 'cvs add' the file, but don't commit. This is kinda lame since it requires
>  commit access, and it creates the administrativia for the file already. I
>  *think* that if you do this, only you can actually add the file (after the
>  patch is accepted ;) but I'm not sure. After the cvs add, a cvs diff -c will
>  show the file (as all +'es, obviously) even though it will complain to
>  stderr about its ignorance about that specific file.

No, cvs diff still won't diff the file -- it says "new file".

> - Don't use cvs diff. Use real diff instead. Something like this:

Too much work to create a new tree.

What I do: I usually *know* what are the new files.  (If you don't,
consider getting a little more organized first :-).  Then do a regular
diff -c between /dev/null and each of the new files, and append that
to the CVS-generated diff.  Patch understands diffs between /dev/null
and a regular file and understands that this means to add the file.
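
[Editorial note: Guido's recipe can be sketched in a few shell commands.
The file names 'newmodule.py' and 'mypatch.diff' are hypothetical,
standing in for your newly added file and your CVS-generated diff, and
the scratch directory is only for illustration.]

```shell
# Work in a scratch directory; pretend newmodule.py is the new file.
mkdir -p /tmp/patchdemo && cd /tmp/patchdemo
printf 'def hello():\n    return "hello"\n' > newmodule.py

# patch(1) treats a diff whose "old" side is /dev/null as "create this
# file", so appending this to a CVS-generated diff adds the new file.
diff -c /dev/null newmodule.py >> mypatch.diff || true  # diff exits 1 when files differ
```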

(I have no idea what the rest of this thread is about.  Dinkytoy
attitude???  I played with toy cars called dinky toys, but I don't see
the connection.  What SF FAQ are we talking about anyway?)

--Guido van Rossum (home page: http://www.python.org/~guido/)


From barry@digicool.com  Sun Mar 18 16:22:38 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Sun, 18 Mar 2001 11:22:38 -0500
Subject: [Python-Dev] Sourceforge FAQ
References: <3AB2D25B.FA724414@prescod.net>
 <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
 <20010318002645.H29286@xs4all.nl>
Message-ID: <15028.57550.447075.226874@anthem.wooz.org>

>>>>> "TP" == Tim Peters <tim.one@home.com> writes:

    TP> I'm always amused that Unix users never allow the limitations
    TP> of their tools to convince them to do something obvious
    TP> instead.

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    TW> - Don't use cvs diff. Use real diff instead. Something like
    TW> this:

    TW>   mv your tree aside (you can just mv your 'src' dir to
    TW> 'src.mypatch' or such), cvs update -d, make distclean in your
    TW> old tree, diff -crN --exclude=CVS src src.mypatch >
    TW> mypatch.diff

Why not try the "obvious" thing <wink>?

    % cvs diff -uN <rev-switches>

(Okay this also generates unified diffs, but I'm starting to find them
more readable than context diffs anyway.)

I seem to recall actually getting this to work effortlessly when I
generated the Mailman 2.0.3 patch (which contained the new file
README.secure_linux).

Yup, looking at the uploaded SF patch

    http://ftp1.sourceforge.net/mailman/mailman-2.0.2-2.0.3.diff

that file's in there, diffed against /dev/null, so the whole file shows
up as added with `+' markers.

-Barry


From thomas@xs4all.net  Sun Mar 18 16:49:25 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 17:49:25 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <200103181501.KAA22545@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sun, Mar 18, 2001 at 10:01:50AM -0500
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com> <20010318002645.H29286@xs4all.nl> <200103181501.KAA22545@cj20424-a.reston1.va.home.com>
Message-ID: <20010318174924.N27808@xs4all.nl>

On Sun, Mar 18, 2001 at 10:01:50AM -0500, Guido van Rossum wrote:
> > What Paul means is that he's added a new file to his tree, and wants to send
> > in a patch that includes that file. Unfortunately, CVS can't do that :P You
> > have two choices:
> > 
> > - 'cvs add' the file, but don't commit. This is kinda lame since it requires
> >  commit access, and it creates the administrativia for the file already. I
> >  *think* that if you do this, only you can actually add the file (after the
> >  patch is accepted ;) but I'm not sure. After the cvs add, a cvs diff -c will
> >  show the file (as all +'es, obviously) even though it will complain to
> >  stderr about its ignorance about that specific file.

> No, cvs diff still won't diff the file -- it says "new file".

Hm, you're right. I'm sure I had it working, but it doesn't work now. Odd. I
guess Barry got hit by the same oddity (see other reply to my msg ;)

> (I have no idea what the rest of this thread is about.  Dinkytoy
> attitude???  I played with toy cars called dinky toys, but I don't see
> the connection.  What SF FAQ are we talking about anyway?)

The thread started by Paul asking why his question wasn't in the FAQ :) As
for 'dinkytoy attitude': it's a great, wonderful toy, but you can't use it
for real. A bit harsh, I guess, but I've been hitting the CVS constraints
many times in the last two weeks. (Moving files, moving directories,
removing directories 'for real', moving between different repositories in
which some files/directories (or just their names) overlap, making diffs
with new files in them ;) etc.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@digicool.com  Sun Mar 18 16:53:25 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 11:53:25 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Sat, 17 Mar 2001 22:31:39 PST."
 <200103180631.BAA03321@panix3.panix.com>
References: <200103180631.BAA03321@panix3.panix.com>
Message-ID: <200103181653.LAA22789@cj20424-a.reston1.va.home.com>

> >> Remember that all bugfixes are available as patches off of SourceForge.
> > 
> > I'm sorry, Aahz, but that is just plain not true. It's not a little bit not
> > true, it's very not true. A lot of the patches applied are either never
> > submitted to SF (because it's the 'obvious fix' by one of the committers) or
> > are modified to some extent from the SF patch proposed. (Often
> > formatting/code style, fairly frequently symbol renaming, and not too
> > infrequently changes in the logic for various reasons.)
> 
> I'm thinking one of us is confused.  CVS is hosted at SourceForge,
> right?  People can download specific parts of Python from SF?  And we're
> presuming there will be a specific fork that patches are checked in to?
> So in what way is my statement not true?

Ah...  Thomas clearly thought you meant the patch manager, and you
didn't make it too clear that's not what you meant.  Yes, they are of
course all available as diffs -- and notice how I use this fact in the
2.0 patches lists in the 2.0 wiki, e.g. on
http://www.python.org/cgi-bin/moinmoin/CriticalPatches.

> >>> ... that would make 10+ 'releases' of various form in those 6 months.
> >>> Ain't no-one[*] going to check them out for a decent spin, they'll just
> >>> wait for the final version.
> >> 
> >> That's why I'm making the beta cycle artificially long (I'd even vote
> >> for a two-month minimum).  It slows the release pace and -- given the
> >> usually high quality of Python betas -- it encourages people to try them
> >> out.  I believe that we *do* need patch betas for system testing.
> > 
> > But having a patch release once every 6 months negates the whole
> > purpose of patch releases :) If you are in need of a bugfix, you
> > don't want to wait three months before a bugfix release beta with
> > your specific bug fixed is going to be released, and you don't want
> > to wait two months more for the release to become final. (Note: we're
> > talking people who don't want to use the next feature release beta or
> > current CVS version, so they aren't likely to try a bugfix release
> > beta either.) Bugfix releases should come often-ish, compared to
> > feature releases. But maybe we can get the BDFL to slow the pace of
> > feature releases instead ? Is the 6-month speedway really appropriate
> > if we have a separate bugfix release track ?
> 
> Well, given that neither of us is arguing on the basis of actual
> experience with Python patch releases, there's no way we can prove one
> point of view as being better than the other.  Tell you what, though:
> take the job of Patch Czar, and I'll follow your lead.  I'll just
> reserve the right to say "I told you so".  ;-)

It seems I need to butt in here.  :-)

I like the model used by Tcl.  They have releases with a 6-12 month
release cycle, 8.0, 8.1, 8.2, 8.3, 8.4.  These have serious alpha and
beta cycles (three of each typically).  Once a release is out, they
issue occasional patch releases, e.g. 8.2.1, 8.2.2, 8.2.3; these are
about a month apart.  The latter bugfixes overlap with the early alpha
releases of the next major release.  I see no sign of beta cycles for
the patch releases.  The patch releases are *very* conservative in
what they add -- just bugfixes, about 5-15 per bugfix release.  They
seem to add the bugfixes to the patch branch as soon as they get them,
and they issue patch releases as soon as they can.

I like this model a lot.  Aahz, if you want to, you can consider this
a BDFL proclamation -- can you add this to your PEP?

> >>> I'm also for starting the maintenance branch right after the
> >>> real release, and start adding bugfixes to it right away, as
> >>> soon as they show up. Keeping up to date on bufixes to the head
> >>> branch is then as 'simple' as watching python-checkins. (Up
> >>> until the fact a whole subsystem gets rewritten, that is :)
> >>> People should still be able to submit bugfixes for the
> >>> maintenance branch specifically.
> > 
> >> That is *precisely* why my original proposal suggested that only
> >> the N-1 release get patch attention, to conserve effort.  It is
> >> also why I suggested that patch releases get hooked to feature
> >> releases.
> > 
> > There is no technical reason to do just N-1. You can branch off as
> > often as you want (in fact, branches never disappear, so if we
> > were building 3.5 ten years from now (and we would still be using
> > CVS <wink GregS>) we could apply a specific patch to the 2.0
> > maintenance branch and release 2.0.128, if need be.)
> 
> No technical reason, no.  It's just that N-1 is going to be similar
> enough to N, particularly for any given bugfix, that it should be
> "trivial" to keep the bugfixes in synch.  That's all.

I agree.  The Tcl folks never issue patch releases when they've issued
a new major release (in fact the patch releases seem to stop long
before they're ready to issue the next major release).  I realize that
we're way behind with 2.0.1 -- since this is the first time we're
doing this, that's OK for now, but in the future I like the Tcl
approach a lot!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From thomas@xs4all.net  Sun Mar 18 17:03:10 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 18:03:10 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <15028.57550.447075.226874@anthem.wooz.org>; from barry@digicool.com on Sun, Mar 18, 2001 at 11:22:38AM -0500
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com> <20010318002645.H29286@xs4all.nl> <15028.57550.447075.226874@anthem.wooz.org>
Message-ID: <20010318180309.P27808@xs4all.nl>

On Sun, Mar 18, 2001 at 11:22:38AM -0500, Barry A. Warsaw wrote:

> Why not try the "obvious" thing <wink>?

>     % cvs diff -uN <rev-switches>

That certainly doesn't work. 'cvs' just gives a '? Filename' line for that
file, then. I just figured out why the 'cvs add <file>; cvs diff -cN' trick
worked before: it works with CVS 1.11 (which is what's in Debian unstable),
but not with CVS 1.10.8 (which is what's in RH7.) But you really have to use
'cvs add' before doing the diff. (So I'll take back *some* of the dinkytoy
comment ;)

> I seem to recall actually getting this to work effortlessly when I
> generated the Mailman 2.0.3 patch (which contained the new file
> README.secure_linux).

Ah, but you had already added and committed that file. Paul wants to do it to
submit a patch to SF, so checking it in to do that is probably not what he
meant. ;-P

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Sun Mar 18 17:07:18 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 18:07:18 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103181653.LAA22789@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sun, Mar 18, 2001 at 11:53:25AM -0500
References: <200103180631.BAA03321@panix3.panix.com> <200103181653.LAA22789@cj20424-a.reston1.va.home.com>
Message-ID: <20010318180718.Q27808@xs4all.nl>

On Sun, Mar 18, 2001 at 11:53:25AM -0500, Guido van Rossum wrote:

> I like the Tcl approach a lot!

Me, too. I didn't know they did it like that, but it makes sense to me :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From barry@digicool.com  Sun Mar 18 17:18:31 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Sun, 18 Mar 2001 12:18:31 -0500
Subject: [Python-Dev] Sourceforge FAQ
References: <3AB2D25B.FA724414@prescod.net>
 <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
 <20010318002645.H29286@xs4all.nl>
 <200103181501.KAA22545@cj20424-a.reston1.va.home.com>
 <20010318174924.N27808@xs4all.nl>
Message-ID: <15028.60903.326987.679071@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    TW> The thread started by Paul asking why his question wasn't in
    TW> the FAQ :) As for 'dinkytoy attitude': it's a great, wonderful
    TW> toy, but you can't use it for real. A bit harsh, I guess, but
    TW> I've been hitting the CVS constraints many times in the last
    TW> two weeks. (Moving files, moving directories, removing
    TW> directories 'for real', moving between different repositories
    TW> in which some files/directories (or just their names) overlap,
    TW> making diffs with new files in them ;) etc.)

Was it Greg Wilson who said at IPC8 that CVS was the worst tool that
everybody uses (or something like that)?

-Barry


From guido@digicool.com  Sun Mar 18 17:21:03 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 12:21:03 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: Your message of "Sun, 18 Mar 2001 17:49:25 +0100."
 <20010318174924.N27808@xs4all.nl>
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com> <20010318002645.H29286@xs4all.nl> <200103181501.KAA22545@cj20424-a.reston1.va.home.com>
 <20010318174924.N27808@xs4all.nl>
Message-ID: <200103181721.MAA23196@cj20424-a.reston1.va.home.com>

> > No, cvs diff still won't diff the file -- it says "new file".
> 
> Hm, you're right. I'm sure I had it working, but it doesn't work now. Odd. I
> guess Barry got hit by the same oddity (see other reply to my msg ;)

Barry posted the right solution: cvs diff -c -N.  The -N option treats
absent files as empty.  I'll use this in the future!

> > (I have no idea what the rest of this thread is about.  Dinkytoy
> > attitude???  I played with toy cars called dinky toys, but I don't see
> > the connection.  What SF FAQ are we talking about anyway?)
> 
> The thread started by Paul asking why his question wasn't in the FAQ :) As
> for 'dinkytoy attitude': it's a great, wonderful toy, but you can't use it
> for real. A bit harsh, I guess, but I've been hitting the CVS constraints
> many times in the last two weeks. (Moving files, moving directories,
> removing directories 'for real', moving between different repositories in
> which some files/directories (or just their names) overlap, making diffs
> with new files in them ;) etc.)

Note that at least *some* of the constraints have to do with issues
inherent in version control.  And cvs diff -N works. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Sun Mar 18 17:23:35 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 12:23:35 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Sun, 18 Mar 2001 18:07:18 +0100."
 <20010318180718.Q27808@xs4all.nl>
References: <200103180631.BAA03321@panix3.panix.com> <200103181653.LAA22789@cj20424-a.reston1.va.home.com>
 <20010318180718.Q27808@xs4all.nl>
Message-ID: <200103181723.MAA23240@cj20424-a.reston1.va.home.com>

[me]
> > I like the Tcl approach a lot!

[Thomas]
> Me, too. I didn't know they did it like that, but it makes sense to me :)

Ok, you are hereby nominated to be the 2.0.1 patch Czar.

(You saw that coming, right? :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)


From barry@digicool.com  Sun Mar 18 17:28:44 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Sun, 18 Mar 2001 12:28:44 -0500
Subject: [Python-Dev] Sourceforge FAQ
References: <3AB2D25B.FA724414@prescod.net>
 <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
 <20010318002645.H29286@xs4all.nl>
 <15028.57550.447075.226874@anthem.wooz.org>
 <20010318180309.P27808@xs4all.nl>
Message-ID: <15028.61516.717449.55864@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas@xs4all.net> writes:

    >> I seem to recall actually getting this to work effortlessly
    >> when I generated the Mailman 2.0.3 patch (which contained the
    >> new file README.secure_linux).

    TW> Ah, but you had already added and committed that file. Paul
    TW> wants to do it to submit a patch to SF, so checking it in to
    TW> do that is probably not what he meant. ;-P

Ah, you're right.  I'd missed Paul's original message.  Who am I to
argue that CVS doesn't suck? :)

-Barry


From paulp@ActiveState.com  Sun Mar 18 18:01:43 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Sun, 18 Mar 2001 10:01:43 -0800
Subject: [Python-Dev] Sourceforge FAQ
References: <LNBBLJKPBEHFEDALKOLCMEFOJGAA.tim.one@home.com>
Message-ID: <3AB4F807.4EAAD9FF@ActiveState.com>

Tim Peters wrote:
> 

> No:  as my signoff line implied, switch to Windows and tell Tim to deal with
> it.  Works for everyone except me <wink>!  I was just tweaking you.  For a
> patch on SF, it should be enough to just attach the new files and leave a
> comment saying where they belong.

Well, I'm going to bite just one more time. As near as I could see, a
patch on SF allows the submission of only a single file. What I did to
get around this (it seemed obvious at the time) was put the contents of
the file (because it was small) in the comment field and attach the
"rest of the patch."

Then I wanted to update the file, but comments are added, not replaced,
so the changes were quickly going to become nasty.

I'm just glad that the answer was sufficiently subtle that it generated
a new thread. I didn't miss anything obvious. :)
-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook


From martin@loewis.home.cs.tu-berlin.de  Sun Mar 18 18:39:48 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 18 Mar 2001 19:39:48 +0100
Subject: [Python-Dev] Sourceforge FAQ
Message-ID: <200103181839.f2IIdm101115@mira.informatik.hu-berlin.de>

> As near as I could see, a patch on SF allows the submission of a single
> file.

That was true with the old patch manager; the new tool can have
multiple artefacts per report. So I guess the proper procedure now is
to attach new files separately (or to build an archive of the new
files and to attach that separately). That requires no funny diffs
against /dev/null and works on VMS, ummh, Windows also.

Regards,
Martin


From aahz@pobox.com (Aahz Maruch)  Sun Mar 18 19:42:30 2001
From: aahz@pobox.com (Aahz Maruch) (aahz@pobox.com (Aahz Maruch))
Date: Sun, 18 Mar 2001 11:42:30 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <no.id> from "Guido van Rossum" at Mar 18, 2001 11:53:25 AM
Message-ID: <200103181942.OAA08158@panix3.panix.com>

Guido:
>Aahz:
>>
>>    [to Thomas Wouters]
>>
>> I'm thinking one of us is confused.  CVS is hosted at SourceForge,
>> right?  People can download specific parts of Python from SF?  And we're
>> presuming there will be a specific fork that patches are checked in to?
>> So in what way is my statement not true?
> 
> Ah...  Thomas clearly thought you meant the patch manager, and you
> didn't make it too clear that's not what you meant.  Yes, they are of
> course all available as diffs -- and notice how I use this fact in the
> 2.0 patches lists in the 2.0 wiki, e.g. on
> http://www.python.org/cgi-bin/moinmoin/CriticalPatches.

Of course I didn't make it clear, because I have no clue what I'm
talking about.  ;-)  And actually, I was talking about simply
downloading complete replacements for specific Python source files.

But that seems to be irrelevant to our current path, so I'll shut up now.

>> Well, given that neither of us is arguing on the basis of actual
>> experience with Python patch releases, there's no way we can prove one
>> point of view as being better than the other.  Tell you what, though:
>> take the job of Patch Czar, and I'll follow your lead.  I'll just
>> reserve the right to say "I told you so".  ;-)
> 
> It seems I need to butt in here.  :-)
> 
> I like the model used by Tcl.  They have releases with a 6-12 month
> release cycle, 8.0, 8.1, 8.2, 8.3, 8.4.  These have serious alpha and
> beta cycles (three of each typically).  Once a release is out, they
> issue occasional patch releases, e.g. 8.2.1, 8.2.2, 8.2.3; these are
> about a month apart.  The latter bugfixes overlap with the early alpha
> releases of the next major release.  I see no sign of beta cycles for
> the patch releases.  The patch releases are *very* conservative in
> what they add -- just bugfixes, about 5-15 per bugfix release.  They
> seem to add the bugfixes to the patch branch as soon as they get them,
> and they issue patch releases as soon as they can.
> 
> I like this model a lot.  Aahz, if you want to, you can consider this
> a BDFL proclamation -- can you add this to your PEP?

BDFL proclamation received.  It'll take me a little while to rewrite
this into an internally consistent PEP.  It would be helpful if you
pre-announced (to c.l.py.announce) the official change in feature release
policy (the 6-12 month target instead of a 6 month target).

>>Thomas Wouters:
>>> There is no technical reason to do just N-1. You can branch off as
>>> often as you want (in fact, branches never disappear, so if we
>>> were building 3.5 ten years from now (and we would still be using
>>> CVS <wink GregS>) we could apply a specific patch to the 2.0
>>> maintenance branch and release 2.0.128, if need be.)
>> 
>> No technical reason, no.  It's just that N-1 is going to be similar
>> enough to N, particularly for any given bugfix, that it should be
>> "trivial" to keep the bugfixes in synch.  That's all.
> 
> I agree.  The Tcl folks never issue patch releases when they've issued
> a new major release (in fact the patch releases seem to stop long
> before they're ready to issue the next major release).  I realize that
> we're way behind with 2.0.1 -- since this is the first time we're
> doing this, that's OK for now, but in the future I like the Tcl
> approach a lot!

Okie-doke.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J


From tim_one@email.msn.com  Sun Mar 18 19:49:17 2001
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 18 Mar 2001 14:49:17 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <3AB4F807.4EAAD9FF@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHOJGAA.tim_one@email.msn.com>

[Paul Prescod]
> Well, I'm going to bite just one more time. As near as I could see, a
> patch only allows the submission of a single file.

That *used* to be true.  Tons of stuff changed on SF recently, including the
ability to attach as many files to patches as you need.  Also to bug reports,
which previously didn't allow any file attachments.  These are all instances
of a Tracker now.  "Feature Requests" is a new Tracker.



From guido@digicool.com  Sun Mar 18 19:58:19 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 14:58:19 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Sun, 18 Mar 2001 11:42:30 PST."
 <200103181942.OAA08158@panix3.panix.com>
References: <200103181942.OAA08158@panix3.panix.com>
Message-ID: <200103181958.OAA23418@cj20424-a.reston1.va.home.com>

> > I like this model a lot.  Aahz, if you want to, you can consider this
> > a BDFL proclamation -- can you add this to your PEP?
> 
> BDFL proclamation received.  It'll take me a little while to rewrite
> this into an internally consistent PEP.  It would be helpful if you
> pre-announced (to c.l.py.announce) the official change in feature release
> policy (the 6-12 month target instead of a 6 month target).

You're reading too much in it. :-)

I don't want to commit to a precise release interval anyway -- no two
releases are the same.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From aahz@pobox.com (Aahz Maruch)  Sun Mar 18 20:12:57 2001
From: aahz@pobox.com (Aahz Maruch) (aahz@pobox.com (Aahz Maruch))
Date: Sun, 18 Mar 2001 12:12:57 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <no.id> from "Guido van Rossum" at Mar 18, 2001 02:58:19 PM
Message-ID: <200103182012.PAA04074@panix2.panix.com>

>> BDFL proclamation received.  It'll take me a little while to rewrite
>> this into an internally consistent PEP.  It would be helpful if you
>> pre-announced (to c.l.py.announce) the official change in feature release
>> policy (the 6-12 month target instead of a 6 month target).
> 
> You're reading too much in it. :-)

Mmmmm....  Probably.

> I don't want to commit to a precise release interval anyway -- no two
> releases are the same.

That's very good to hear.  Perhaps I'm alone in this perception, but it
has sounded to me as though there's a goal (if not a "precise" interval)
of a release every six months.  Here's a quote from you on c.l.py:

"Given our current pace of releases that should be about 6 months warning."

With your current posting frequency to c.l.py, such oracular statements
have some of the force of a Proclamation.  ;-)
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J


From paulp@ActiveState.com  Sun Mar 18 21:12:45 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Sun, 18 Mar 2001 13:12:45 -0800
Subject: [Python-Dev] Sourceforge FAQ
References: <200103181839.f2IIdm101115@mira.informatik.hu-berlin.de>
Message-ID: <3AB524CD.67A0DEEA@ActiveState.com>

"Martin v. Loewis" wrote:
> 
> > As near as I could see, a patch only allows the submission of a single
> > file.
> 
> That was true with the old patch manager; the new tool can have
> multiple artefacts per report. 

The user interface really does not indicate that multiple files may be
attached. Do I just keep going back into the patch page, adding files?

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook


From guido@python.org  Sun Mar 18 22:43:27 2001
From: guido@python.org (Guido van Rossum)
Date: Sun, 18 Mar 2001 17:43:27 -0500
Subject: [Python-Dev] Re: What has become of PEP224 ?
Message-ID: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>

[On c.l.py]
"Aahz Maruch" <aahz@panix.com> wrote in message
news:992tb4$qf5$1@panix2.panix.com...
> [cc'd to Barry Warsaw in case he wants to comment]

(I happen to be skimming c.l.py this lazy Sunday afternoon :-)

> In article <3ab4f320@nntp.server.uni-frankfurt.de>,
> Michael 'Mickey' Lauer  <mickey@Vanille.de> wrote:
> >
> >Hi. If I remember correctly PEP224 (the famous "attribute docstrings")
> >has only been postponed because Python 2.0 was in feature freeze
> >in August 2000. Will it be in 2.1 ? If not, what's the reason ? What
> >is needed for it to be included in 2.1 ?
>
> I believe it has been essentially superseded by PEP 232; I thought
> function attributes were going to be in 2.1, but I don't see any clear
> indication.

Actually, the attribute docstrings PEP is about a syntax for giving
non-function objects a docstring.  That's quite different than the function
attributes PEP.

The attribute docstring PEP didn't get in (and is unlikely to get in in its
current form) because I don't like the syntax much, *and* because the way to
look up the docstrings is weird and ugly: you'd have to use something like
instance.spam__doc__ or instance.__doc__spam (I forget which; they're both
weird and ugly).

I also expect that the doc-sig will be using the same syntax (string
literals in non-docstring positions) for a different purpose.  So I see
little chance for PEP 224.  Maybe I should just pronounce on this, and
declare the PEP rejected.

Unless Ping thinks this would be a really cool feature to be added to pydoc?
(Ping's going to change pydoc from importing the target module to scanning
its source, I believe -- then he could add this feature without changing the
Python parser. :-)
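For readers who never saw the PEP: it proposed that a bare string literal following an assignment become that attribute's docstring, stored under a mangled name. The exact mangled name is the very thing Guido says he forgets above, so `__doc_spam__` below is purely a stand-in; today the literal is just a discarded expression, so this sketch attaches the docstring by hand.

```python
# Illustrative sketch of PEP 224's idea (never accepted; the mangled
# attribute name __doc_spam__ is a stand-in, not the PEP's final spelling).
class Jar:
    """A cookie jar."""
    spam = 0
    # Under PEP 224, a string literal right after the assignment above
    # would have been stored automatically under a name like this:
    __doc_spam__ = "Number of spam tins in the jar."

# Lookup is the ugly part Guido objects to: you must know the mangling.
doc = Jar.__doc_spam__
```

The objection in the email is visible in the last line: documentation for `Jar.spam` lives under a second, synthesized attribute name rather than on the attribute itself.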

--Guido van Rossum






From tim_one@email.msn.com  Sun Mar 18 22:48:38 2001
From: tim_one@email.msn.com (Tim Peters)
Date: Sun, 18 Mar 2001 17:48:38 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: <3AB277C7.28FE9B9B@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEIEJGAA.tim_one@email.msn.com>

[M.-A. Lemburg]
> Looking around some more on the web, I found that the GNU MP (GMP)
> lib has switched from being GPLed to LGPLed,

Right.

> meaning that it can actually be used by non-GPLed code as long as
> the source code for the GMP remains publically accessible.

Ask Stallman <0.9 wink>.

> ...
> Since the GMP offers arbitrary precision numbers and also has
> a rational number implementation I wonder if we could use it
> in Python to support fractions and arbitrary precision
> floating points ?!

Note that Alex Martelli runs the "General Multiprecision Python" project on
SourceForge:

    http://gmpy.sourceforge.net/

He had a severe need for fast rational arithmetic in his Python programs, so
he started wrapping the full GMP out of necessity.  I'm sorry to say that I
haven't had time to even download his code.

WRT floating point, GMP supports arbitrary-precision floats too, but not in a
way you're going to like:  they're binary floats, and do not deliver
platform-independent results.  That last point is subtle, because the docs
say:

    The precision of a calculation is defined as follows:  Compute the
    requested operation exactly (with "infinite precision"), and truncate
    the result to the destination variable precision.

Leaving aside that truncation is a bad idea, that *sounds*
platform-independent.  The trap is that GMP supports no way to specify the
precision of a float result exactly:  you can ask for any precision you like,
but the implementation reserves the right to *use* any precision whatsoever
that's at least as large as what you asked for.  And, in practice, they do
use more than you asked for, depending on the word size of the machine.  This
is in line with GMP's overriding goal of being fast, rather than consistent
or elegant.

GMP's int and rational facilities could be used to build platform-independent
decimal fp, though.  However, this doesn't get away from the string<->float
issues I covered before:  if you're going to use binary ints internally (and
GMP does), decimal_string<->decimal_float is quadratic time in both
directions.

Note too that GMP is a lot of code, and difficult to port due to its "speed
first" goals.  Making Python *rely* on it is thus dubious (GMP on a Palm
Pilot?  maybe ...).
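The "decimal fp on top of big ints" idea Tim sketches can be made concrete in a few lines. This is a toy, not MAL's design or GMP's API: a value is `coefficient * 10**-scale`, with plain Python arbitrary-precision ints standing in for GMP integers.

```python
# Toy decimal fixed-point built on arbitrary-precision integers, in the
# spirit of Tim's suggestion (a sketch only, assuming non-negative values).
class Dec:
    def __init__(self, coeff, scale):
        self.coeff, self.scale = coeff, scale   # value = coeff * 10**-scale

    def __add__(self, other):
        # Align both operands to the larger scale, then add coefficients.
        s = max(self.scale, other.scale)
        a = self.coeff * 10 ** (s - self.scale)
        b = other.coeff * 10 ** (s - other.scale)
        return Dec(a + b, s)

    def __str__(self):
        digits = str(self.coeff).rjust(self.scale + 1, "0")
        if not self.scale:
            return digits
        return digits[:-self.scale] + "." + digits[-self.scale:]

# 0.1 + 0.2 is exactly 0.3 -- the WYSIWYG property binary floats lack.
print(Dec(1, 1) + Dec(2, 1))
```

Because the coefficient stays a decimal-scaled integer, string conversion is exact and cheap; the quadratic cost Tim mentions only appears if you insist on converting between decimal strings and *binary* integers.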

> Here's pointer to what the GNU MP has to offer:
>
>   http://www.math.columbia.edu/online/gmp.html

The official home page (according to Torbjörn Granlund, GMP's dad) is

    http://www.swox.com/gmp/

> The existing mpz module only supports MP integers, but support
> for the other two types should only be a matter of hard work ;-).

Which Alex already did.  Now what <wink>?
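For a feel of what exact rational arithmetic buys (the facility Alex wrapped GMP to get), here is a sketch using Python's own `Fraction` type, which entered the stdlib well after this thread; it shows the semantics under discussion, not gmpy's API or GMP's speed.

```python
# Exact rational arithmetic: no rounding, ever.  This uses the stdlib
# fractions module as a stand-in for a GMP mpq wrapper (an assumption
# for illustration -- gmpy's actual interface is not shown here).
from fractions import Fraction

tenth = Fraction(1, 10)
total = sum([tenth] * 10)   # ten exact tenths
print(total)                # exactly 1, unlike summing ten binary 0.1s
```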



From aleaxit@yahoo.com  Sun Mar 18 23:26:23 2001
From: aleaxit@yahoo.com (Alex Martelli)
Date: Mon, 19 Mar 2001 00:26:23 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCAEIEJGAA.tim_one@email.msn.com>
Message-ID: <08e801c0b003$824f4f00$0300a8c0@arthur>

"Tim Peters" <tim_one@email.msn.com> writes:

> Note that Alex Martelli runs the "General Multiprecision Python" project on
> SourceForge:
>
>     http://gmpy.sourceforge.net/
>
> He had a severe need for fast rational arithmetic in his Python programs, so
> starting wrapping the full GMP out of necessity.  I'm sorry to say that I
> haven't had time to even download his code.

...and as for me, I haven't gotten around to prettying it up for beta
release yet (mostly the docs -- still just a plain textfile) as it's doing
what I need... but, I _will_ get a round tuit...


> WRT floating point, GMP supports arbitrary-precision floats too, but not in a
> way you're going to like:  they're binary floats, and do not deliver
> platform-independent results.  That last point is subtle, because the docs
> say:
>
>     The precision of a calculation is defined as follows:  Compute the
>     requested operation exactly (with "infinite precision"), and truncate
>     the result to the destination variable precision.
>
> Leaving aside that truncation is a bad idea, that *sounds*
> platform-independent.  The trap is that GMP supports no way to specify the
> precision of a float result exactly:  you can ask for any precision you like,

There's another free library that interoperates with GMP to remedy
this -- it's called MPFR and lives at http://www.loria.fr/projets/mpfr/.
It's also LGPL.  I haven't looked much into it as it seems it's not been
ported to Windows yet (and that looks like quite a project) which is
the platform I'm currently using (and, rationals do what I need:-).

> > The existing mpz module only supports MP integers, but support
> > for the other two types should only be a matter of hard work ;-).
>
> Which Alex already did.  Now what <wink>?

Hmmm, port MPFR everywhere...?-)  Pearu Peterson already did the
MPFR Python wrapper interoperating with GMPY, btw -- it lives at
http://cens.ioc.ee/~pearu/misc/gmpy-mpfr/ (haven't tested it as
I can't run MPFR myself, as above explained).


Alex



_________________________________________________________
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com



From mal@lemburg.com  Mon Mar 19 00:07:17 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 01:07:17 +0100
Subject: [Python-Dev] Re: What has become of PEP224 (attribute docstrings) ?
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>
Message-ID: <3AB54DB5.52254EB6@lemburg.com>

Guido van Rossum wrote:
> ...
>
> The attribute docstring PEP didn't get in (and is unlikely to get in in its
> current form) because I don't like the syntax much, *and* because the way to
> look up the docstrings is weird and ugly: you'd have to use something like
> instance.spam__doc__ or instance.__doc__spam (I forget which; they're both
> weird and ugly).

It was the only way I could think of for having attribute doc-
strings behave in the same way as e.g. methods do, that is they
should respect the class hierarchy in much the same way. This is
obviously needed if you want to document not only the method interface
of a class, but also its attributes which could be accessible from
the outside.

I am not sure whether parsing the module would enable the same
sort of functionality unless Ping's code does its own interpretation
of imports and base class lookups.

Note that the attribute doc string attribute names are really
secondary to the PEP. The main point is using the same syntax
for attribute doc-strings as we already use for classes, modules
and functions.

> I also expect that the doc-sig will be using the same syntax (string
> literals in non-docstring positions) for a different purpose. 

I haven't seen any mention of this on the doc-sig. Could you explain
what they intend to use them for ?

> So I see
> little chance for PEP 224.  Maybe I should just pronounce on this, and
> declare the PEP rejected.

Do you have an alternative approach which meets the design goals
of the PEP ?
 
> Unless Ping thinks this would be a really cool feature to be added to pydoc?
> (Ping's going to change pydoc from importing the target module to scanning
> its source, I believe -- then he could add this feature without changing the
> Python parser. :-)

While Ping's code is way cool, I think we shouldn't forget that
other code will also want to do its own introspection, possibly
even at run-time which is certainly not possible by (re-)parsing the
source code every time.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From tim.one@home.com  Mon Mar 19 05:26:27 2001
From: tim.one@home.com (Tim Peters)
Date: Mon, 19 Mar 2001 00:26:27 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: <08e801c0b003$824f4f00$0300a8c0@arthur>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com>

[Alex Martelli]
> ...
> There's another free library that interoperates with GMP to remedy
> this -- it's called MPFR and lives at http://www.loria.fr/projets/mpfr/.
> It's also LGPL.  I haven't looked much into it as it seems it's not been
> ported to Windows yet (and that looks like quite a project) which is
> the platform I'm currently using (and, rationals do what I need:-).

Thanks for the pointer!  From a quick skim, good news & bad news (although
which is which may depend on POV):

+ The authors apparently believe their MPFR routines "should replace
  the MPF class in further releases of GMP".  Then somebody else will
  port them.

+ Allows exact specification of result precision (which will make the
  results 100% platform-independent, unlike GMP's).

+ Allows choice of IEEE 754 rounding modes (unlike GMP's truncation).

+ But is still binary floating-point.

Marc-Andre is especially interested in decimal fixed- and floating-point, and
even more specifically than that, of a flavor that will be efficient for
working with decimal types in databases (which I suspect-- but don't
know --means that I/O (conversion) costs are more important than computation
speed once converted).  GMP + MPFR don't really address the decimal part of
that.  Then again, MAL hasn't quantified any of his desires either <wink>; I
bet he'd be happier with a BCD-ish scheme.

> ...
> Hmmm, port MPFR everywhere...?-)  Pearu Peterson already did the
> MPFR Python wrapper interoperating with GMPY, btw -- it lives at
> http://cens.ioc.ee/~pearu/misc/gmpy-mpfr/ (haven't tested it as
> I can't run MPFR myself, as above explained).

OK, that amounts to ~200 lines of C code to wrap some of the MPFR functions
(exp, log, sqrt, sincos, agm, log2, pi, pow; many remain to be wrapped; and
they don't allow specifying precision yet).  So Pearu still has significant
work to do here, while MAL is wondering who in their right mind would want to
do *anything* with numbers except add them <wink>.

hmm-he's-got-a-good-point-there-ly y'rs  - tim



From dkwolfe@pacbell.net  Mon Mar 19 05:57:53 2001
From: dkwolfe@pacbell.net (Dan Wolfe)
Date: Sun, 18 Mar 2001 21:57:53 -0800
Subject: [Python-Dev] Makefile woos..
Message-ID: <0GAF003LYKEZSF@mta5.snfc21.pbi.net>

While compiling the 2.0b1 release on my shiny new Mac OS X box
today, I noticed that the fcntl module was breaking, so I went hunting
for the cause...  (it was better than working on my taxes!)....

To make a long story short... I should have worked on my taxes -- at
least -- 80% probability -- I understand those...

Ok, the reason that the fcntl module was breaking was that uname now
reports Darwin 1.3 and it wasn't in the list... in the process of fixing
that and testing to make sure that it was going to work correctly, I
discovered that sys.platform was reporting that I was on a darwin1
platform... hmm, where did that come from...

It turns out that MACHDEP is set correctly to Darwin1.3 when
configure queries the system... however, during the process of
converting Makefile.pre.in to Makefile it passes through the following sed
script that starts around line 6284 of the configure script:

sed 's/%@/@@/; s/@%/@@/; s/%g\$/@g/; /@g\$/s/[\\\\&%]/\\\\&/g;
  s/@@/%@/; s/@@/@%/; s/@g\$/%g/' > conftest.subs <<\\CEOF

which, when applied to Makefile.pre.in, results in

MACHDEP = darwin1   instead of   MACHDEP = darwin1.3

Question 1: I'm not geeky enough to understand why the '.3' gets
removed... is there a problem with the sed script? or did I overlook
something?
Question 2: I noticed that all the other versions are
<OS><MajorRevision> also - is this intentional? or is this just a result
of a bug in the sed script?

If someone can help me understand what's going on here, I'll be glad to
submit the patch to fix the fcntl module and a few others on Mac OS X.

- Dan - who probably would have finished off his taxes if he hadn't
opened this box....
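A hedged guess at Question 1, not verified against this exact configure: the `<OS><MajorRevision>` pattern seen elsewhere (linux2, sunos5) suggests the truncation is intentional and happens when configure first computes MACHDEP, with the release string cut at the first dot; the quoted sed script is autoconf's substitution-escaping boilerplate and is probably not the culprit. The stand-in values below illustrate the suspected logic:

```shell
# Hypothetical reconstruction of how MACHDEP gets its value: lowercase
# `uname -s` plus `uname -r` truncated at the first dot.  Fixed strings
# stand in for the real uname output so the sketch is reproducible.
system="darwin"                              # stand-in for `uname -s | tr A-Z a-z`
release="1.3"                                # stand-in for `uname -r`
major=$(echo "$release" | sed 's/\..*//')    # "1.3" -> "1"
echo "${system}${major}"                     # yields "darwin1", matching the report
```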


From greg@cosc.canterbury.ac.nz  Mon Mar 19 03:02:55 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 19 Mar 2001 15:02:55 +1200 (NZST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AB1ECEA.CD0FFC51@tismer.com>
Message-ID: <200103190302.PAA06055@s454.cosc.canterbury.ac.nz>

Christian Tismer <tismer@tismer.com>:

> But stopping the interpreter is a perfect unwind, and we
> can start again from anywhere.

Hmmm... Let me see if I have this correct.

You can switch from uthread A to uthread B as long
as the current depth of interpreter nesting is the
same as it was when B was last suspended. It doesn't
matter if the interpreter has returned and then
been called again, as long as it's at the same
level of nesting on the C stack.

Is that right? Is that the only restriction?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From uche.ogbuji@fourthought.com  Mon Mar 19 07:09:46 2001
From: uche.ogbuji@fourthought.com (Uche Ogbuji)
Date: Mon, 19 Mar 2001 00:09:46 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation
In-Reply-To: Message from "Tim Peters" <tim.one@home.com>
 of "Sat, 17 Mar 2001 20:36:40 EST." <LNBBLJKPBEHFEDALKOLCGEGCJGAA.tim.one@home.com>
Message-ID: <200103190709.AAA10053@localhost.localdomain>

> FYI, I pointed a correspondent to Neil's new generator patch (among other
> things), and got this back.  Not being a Web Guy at heart, I don't have a
> clue about XSLT (just enough to know that 4-letter acronyms are a webb
> abomination <wink>).
> 
> Note:  in earlier correspondence, the generator idea didn't seem to "click"
> until I called them "resumable functions" (as I often did in the past, but
> fell out of the habit).  People new to the concept often pick that up
> quicker, or even, as in this case, remember that they once rolled such a
> thing by hand out of prior necessity.
> 
> Anyway, possibly food for thought if XSLT means something to you ...

Quite interesting.  I brought up this *exact* point at the Stackless BOF at 
IPC9.  I mentioned that the immediate reason I was interested in Stackless was 
to supercharge the efficiency of 4XSLT.  I think that a stackless 4XSLT could 
pretty much annihilate the other processors in the field for performance.


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji@fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python




From uche.ogbuji@fourthought.com  Mon Mar 19 07:15:07 2001
From: uche.ogbuji@fourthought.com (Uche Ogbuji)
Date: Mon, 19 Mar 2001 00:15:07 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation
In-Reply-To: Message from Paul Prescod <paulp@ActiveState.com>
 of "Sat, 17 Mar 2001 17:50:39 PST." <3AB4146E.62AE3299@ActiveState.com>
Message-ID: <200103190715.AAA10076@localhost.localdomain>

> I would call what you need for an efficient XSLT implementation "lazy
> lists." They are never infinite but you would rather not pre-compute
> them in advance. Often you use only the first item. Iterators probably
> would be a good implementation technique.

Well, if you don't want unmanageable code, you could get the same benefit as 
stackless by iterating rather than recursing throughout an XSLT implementation. 
But why not then go farther?  Implement the whole thing in raw assembler?

What Stackless would give is a way to keep good, readable execution structured 
without sacrificing performance.

XSLT interpreters are complex beasts, and I can't even imagine replacing 
4XSLT's xsl:call-template dispatch code to be purely iterative.  The result 
would be impenetrable.

But then again, this isn't exactly what you said.  I'm not sure why you think 
lazy lists would make all the difference.  Not so according to my benchmarking.

Aside: XPath node sets are one reason I've been interested in a speed- and 
space-efficient set implementation for Python.  However, Guido and Tim are rather 
convincing that this is a fool's errand.


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji@fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python




From MarkH@ActiveState.com  Mon Mar 19 09:40:24 2001
From: MarkH@ActiveState.com (Mark Hammond)
Date: Mon, 19 Mar 2001 20:40:24 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
Message-ID: <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com>

I understand the issue of "default Unicode encoding" is a loaded one,
however I believe with the Windows' file system we may be able to use a
default.

Windows provides 2 versions of many functions that accept "strings" - one
that uses "char *" arguments, and another using "wchar *" for Unicode.
Interestingly, the "char *" versions of these functions almost always support
"mbcs" encoded strings.

To make Python work nicely with the file system, we really should handle
Unicode characters somehow.  It is not too uncommon to find that the "program
files" or the "user" directory has Unicode characters in non-English
versions of Win2k.

The way I see it, to fix this we have 2 basic choices when a Unicode object
is passed as a filename:
* we call the Unicode versions of the CRTL.
* we auto-encode using the "mbcs" encoding, and still call the non-Unicode
versions of the CRTL.

The first option has a problem in that determining what Unicode support
Windows 95/98 have may be more trouble than it is worth.  Sticking to purely
ascii versions of the functions means that the worst thing that can happen
is we get a regular file-system error if an mbcs encoded string is passed on
a non-Unicode platform.

Does anyone have any objections to this scheme or see any drawbacks in it?
If not, I'll knock up a patch...

Mark.



From mal@lemburg.com  Mon Mar 19 10:09:49 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 11:09:49 +0100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com>
Message-ID: <3AB5DAED.F7089741@lemburg.com>

Mark Hammond wrote:
> 
> I understand the issue of "default Unicode encoding" is a loaded one,
> however I believe with the Windows' file system we may be able to use a
> default.
> 
> Windows provides 2 versions of many functions that accept "strings" - one
> that uses "char *" arguments, and another using "wchar *" for Unicode.
> Interestingly, the "char *" versions of function almost always support
> "mbcs" encoded strings.
> 
> To make Python work nicely with the file system, we really should handle
> Unicode characters somehow.  It is not too uncommon to find the "program
> files" or the "user" directory have Unicode characters in non-english
> version of Win2k.
> 
> The way I see it, to fix this we have 2 basic choices when a Unicode object
> is passed as a filename:
> * we call the Unicode versions of the CRTL.
> * we auto-encode using the "mbcs" encoding, and still call the non-Unicode
> versions of the CRTL.
> 
> The first option has a problem in that determining what Unicode support
> Windows 95/98 have may be more trouble than it is worth.  Sticking to purely
> ascii versions of the functions means that the worst thing that can happen
> is we get a regular file-system error if an mbcs encoded string is passed on
> a non-Unicode platform.
> 
> Does anyone have any objections to this scheme or see any drawbacks in it?
> If not, I'll knock up a patch...

Hmm... the problem with MBCS is that it is not one encoding,
but can be many things. I don't know if this is an issue (can there
be more than one encoding per process ? is the encoding a user or
system setting ? does the CRT know which encoding to use/assume ?),
but the Unicode approach sure sounds a lot safer.

Also, what would os.listdir() return ? Unicode strings or 8-bit
strings ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From MarkH@ActiveState.com  Mon Mar 19 10:34:46 2001
From: MarkH@ActiveState.com (Mark Hammond)
Date: Mon, 19 Mar 2001 21:34:46 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <3AB5DAED.F7089741@lemburg.com>
Message-ID: <LCEPIIGDJPKCOIHOBJEPMEDHDGAA.MarkH@ActiveState.com>

> Hmm... the problem with MBCS is that it is not one encoding,
> but can be many things.

Yeah, but I think specifically with filenames this is OK.  We would be
translating from Unicode objects using MBCS in the knowledge that somewhere
in the Win32 maze they will be converted back to Unicode, using MBCS, to
access the Unicode based filesystem.

At the moment, you just get an exception - the dreaded "ASCII encoding
error: ordinal not in range(128)" :)

I don't see the harm - we are making no assumptions about the user's data,
just about the platform.  Note that I never want to assume a string object
is in a particular encoding - just assume that the CRTL file functions can
handle a particular encoding for their "filename" parameter.  I don't want
to handle Unicode objects in any "data" params, just the "filename".

Mark.



From MarkH@ActiveState.com  Mon Mar 19 10:53:01 2001
From: MarkH@ActiveState.com (Mark Hammond)
Date: Mon, 19 Mar 2001 21:53:01 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <3AB5DAED.F7089741@lemburg.com>
Message-ID: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com>

Sorry, I notice I didn't answer your specific question:

> Also, what would os.listdir() return ? Unicode strings or 8-bit
> strings ?

This would not change.

This is what my testing shows:

* I can switch to a German locale, and create a file using the keystrokes
"`atest`o".  The "`" is the dead-char so I get an umlaut over the first and
last characters.

* os.listdir() returns '\xe0test\xf2' for this file.

* That same string can be passed to "open" etc to open the file.

* The only way to get that string to a Unicode object is to use the
encodings "Latin1" or "mbcs".  Of them, "mbcs" would have to be safer, as at
least it has a hope of handling non-latin characters :)

So - assume I am passed a Unicode object that represents this filename.  At
the moment we simply throw that exception if we pass that Unicode object to
open().  I am proposing that "mbcs" be used in this case instead of the
default "ascii".

If nothing else, my idea could be considered a "short-term" solution.  If
ever it is found to be a problem, we can simply move to the unicode APIs,
and nothing would break - just possibly more things _would_ work :)

Mark.
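The proposal above can be sketched in a few lines: encode a Unicode filename with "mbcs" before handing it to the char*-based CRT calls. The "mbcs" codec exists only on Windows, so this demo falls back to Latin-1 elsewhere purely to stay runnable; that fallback is an assumption of the sketch, not part of the proposal.

```python
# Sketch of the proposed behaviour (hypothetical helper, not Python's
# actual open() implementation).
def filename_to_bytes(name):
    try:
        return name.encode("mbcs")     # Windows: what the char* CRT expects
    except LookupError:                # "mbcs" codec unavailable off-Windows;
        return name.encode("latin-1")  # demo-only fallback

# The accented name from Mark's os.listdir() experiment:
encoded = filename_to_bytes(u"\xe0test\xf2")
```

Under a Western locale both paths yield the same bytes `'\xe0test\xf2'` that os.listdir() reported, which is why round-tripping through open() works.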



From mal@lemburg.com  Mon Mar 19 11:17:18 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 12:17:18 +0100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com>
Message-ID: <3AB5EABE.CE4C5760@lemburg.com>

Mark Hammond wrote:
> 
> Sorry, I notice I didn't answer your specific question:
> 
> > Also, what would os.listdir() return ? Unicode strings or 8-bit
> > strings ?
> 
> This would not change.
> 
> This is what my testing shows:
> 
> * I can switch to a German locale, and create a file using the keystrokes
> "`atest`o".  The "`" is the dead-char so I get an umlaut over the first and
> last characters.
> 
> * os.listdir() returns '\xe0test\xf2' for this file.
> 
> * That same string can be passed to "open" etc to open the file.
> 
> * The only way to get that string to a Unicode object is to use the
> encodings "Latin1" or "mbcs".  Of them, "mbcs" would have to be safer, as at
> least it has a hope of handling non-latin characters :)
> 
> So - assume I am passed a Unicode object that represents this filename.  At
> the moment we simply throw that exception if we pass that Unicode object to
> open().  I am proposing that "mbcs" be used in this case instead of the
> default "ascii"
> 
> If nothing else, my idea could be considered a "short-term" solution.  If
> ever it is found to be a problem, we can simply move to the unicode APIs,
> and nothing would break - just possibly more things _would_ work :)

Sounds like a good idea. We'd only have to ensure that whatever
os.listdir() returns can actually be used to open the file, but that
seems to be the case... at least for Latin-1 chars (I wonder how
well this behaves with Japanese chars).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Mon Mar 19 11:34:30 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 12:34:30 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com>
Message-ID: <3AB5EEC6.F5D6FE3B@lemburg.com>

Tim Peters wrote:
> 
> [Alex Martelli]
> > ...
> > There's another free library that interoperates with GMP to remedy
> > this -- it's called MPFR and lives at http://www.loria.fr/projets/mpfr/.
> > It's also LGPL.  I haven't looked much into it as it seems it's not been
> > ported to Windows yet (and that looks like quite a project) which is
> > the platform I'm currently using (and, rationals do what I need:-).
> 
> Thanks for the pointer!  From a quick skim, good news & bad news (although
> which is which may depend on POV):
> 
> + The authors apparently believe their MPFR routines "should replace
>   the MPF class in further releases of GMP".  Then somebody else will
>   port them.

...or simply install both packages...
 
> + Allows exact specification of result precision (which will make the
>   results 100% platform-independent, unlike GMP's).

This is a Good Thing :)
 
> + Allows choice of IEEE 754 rounding modes (unlike GMP's truncation).
> 
> + But is still binary floating-point.

:-(
 
> Marc-Andre is especially interested in decimal fixed- and floating-point, and
> even more specifically than that, of a flavor that will be efficient for
> working with decimal types in databases (which I suspect-- but don't
> know --means that I/O (conversion) costs are more important than computation
> speed once converted).  GMP + MPFR don't really address the decimal part of
> that.  Then again, MAL hasn't quantified any of his desires either <wink>; I
> bet he'd be happier with a BCD-ish scheme.

The ideal solution for my needs would be an implementation which
allows:

* fast construction of decimals using string input
* fast decimal string output
* good interaction with the existing Python numeric types

BCD-style or simple decimal string style implementations serve
these requirements best, but GMP and MPFR do not address them.
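
As a sketch of what such a decimal-string type would look like (an editor's illustration, using the interface Python's decimal module eventually adopted), the three requirements come out as:

```python
from decimal import Decimal

# Fast, exact construction from string input...
d = Decimal("0.10")
# ...exact decimal string output (the scale survives the round-trip)...
assert str(d) == "0.10"
# ...and interaction with the existing Python numeric types.
assert d + 1 == Decimal("1.10")
# Unlike binary floats, decimal arithmetic is exact here:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
assert 0.1 + 0.2 != 0.3  # the binary-float counterpart is not
```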
 
> > ...
> > Hmmm, port MPFR everywhere...?-)  Pearu Peterson already did the
> > MPFR Python wrapper interoperating with GMPY, btw -- it lives at
> > http://cens.ioc.ee/~pearu/misc/gmpy-mpfr/ (haven't tested it as
> > I can't run MPFR myself, as above explained).
> 
> OK, that amounts to ~200 lines of C code to wrap some of the MPFR functions
> (exp, log, sqrt, sincos, agm, log2, pi, pow; many remain to be wrapped; and
> they don't allow specifying precision yet).  So Pearu still has significant
> work to do here, while MAL is wondering who in their right mind would want to
> do *anything* with numbers except add them <wink>.

Right: as long as there is a possibility to convert these decimals to 
Python floats or integers (or longs) I don't really care ;)

Seriously, I think that the GMP lib + MPFR lib provide a very
good basis to do work with numbers on Unix. Unfortunately, they
don't look very portable (given all that assembler code in there
and the very Unix-centric build system).

Perhaps we'd need a higher level interface to all of this which
can then take GMP or some home-grown "port" of the Python long
implementation to base-10 as backend to do the actual work.

It would have to provide these types:
 Integer  - arbitrary-precision integers
 Rational - ditto for rational numbers
 Float    - ditto for floating-point numbers

Integration with Python is easy given the new coercion mechanism
at C level. The problem I see is how to define coercion order, i.e.
Integer + Rational should produce a Rational, but what about
Rational + Float or Float + Python float or Integer + Python float ?
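
One consistent answer to the coercion-order question is to always widen toward the less exact type (Integer -> Rational -> Float); for the rational case that is, incidentally, what Python's own fractions module settled on much later:

```python
from fractions import Fraction

# Integer + Rational stays exact and produces a Rational.
r = 1 + Fraction(1, 2)
assert type(r) is Fraction and r == Fraction(3, 2)

# As soon as a float is involved, exactness is lost:
# Rational + Float widens to float.
f = Fraction(1, 2) + 0.5
assert type(f) is float and f == 1.0
```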

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Mon Mar 19 11:38:31 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 12:38:31 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCAEIEJGAA.tim_one@email.msn.com>
Message-ID: <3AB5EFB7.2E2AAED0@lemburg.com>

Tim Peters wrote:
> 
> [M.-A. Lemburg]
> > Looking around some more on the web, I found that the GNU MP (GMP)
> > lib has switched from being GPLed to LGPLed,
> 
> Right.
> 
> > meaning that it can actually be used by non-GPLed code as long as
> > the source code for the GMP remains publically accessible.
> 
> Ask Stallman <0.9 wink>.
> 
> > ...
> > Since the GMP offers arbitrary precision numbers and also has
> > a rational number implementation I wonder if we could use it
> > in Python to support fractions and arbitrary precision
> > floating points ?!
> 
> Note that Alex Martelli runs the "General Multiprecision Python" project on
> SourceForge:
> 
>     http://gmpy.sourceforge.net/
> 
> He had a severe need for fast rational arithmetic in his Python programs, so
> started wrapping the full GMP out of necessity.

I found that link after hacking away at yet another GMP
wrapper for three hours Friday night... turned out to be a nice
proof of concept, but also showed some issues with respect to
coercion (see my other reply).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From gherman@darwin.in-berlin.de  Mon Mar 19 11:57:49 2001
From: gherman@darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 12:57:49 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
Message-ID: <3AB5F43D.E33B188D@darwin.in-berlin.de>

I wrote on comp.lang.python today:
> 
> is there a simple way (or any way at all) to find out for 
> any given hard disk how much free space is left on that
> device? I looked into the os module, but either not hard
> enough or there is no such function. Of course, the ideal
> solution would be platform-independant, too... :)

Is there any good reason for not having a cross-platform
solution to this? I'm certainly not the first to ask for
such a function and it certainly exists for all platforms,
doesn't it?

Unfortunately, OS problems like that make it rather impossible
to write truly cross-platform applications in Python,
even if it is touted to be exactly that.

I know that OS differ in the services they provide, but in
this case it seems to me that each one *must* have such a 
function, so I don't understand why it's not there...

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")


From thomas@xs4all.net  Mon Mar 19 12:07:13 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 13:07:13 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5F43D.E33B188D@darwin.in-berlin.de>; from gherman@darwin.in-berlin.de on Mon, Mar 19, 2001 at 12:57:49PM +0100
References: <3AB5F43D.E33B188D@darwin.in-berlin.de>
Message-ID: <20010319130713.M29286@xs4all.nl>

On Mon, Mar 19, 2001 at 12:57:49PM +0100, Dinu Gherman wrote:
> I wrote on comp.lang.python today:
> > is there a simple way (or any way at all) to find out for 
> > any given hard disk how much free space is left on that
> > device? I looked into the os module, but either not hard
> > enough or there is no such function. Of course, the ideal
> > solution would be platform-independant, too... :)

> Is there any good reason for not having a cross-platform
> solution to this? I'm certainly not the first to ask for
> such a function and it certainly exists for all platforms,
> doesn't it?

I think the main reason such a function does not exist is that no-one wrote
it. If you can write a portable function, or fake one by making different
implementations on different platforms, please contribute ;) Step one is
making an inventory of the available functions, though, so you know how
large an intersection you have to work with. The fact that you have to start
that study is probably the #1 reason no-one's done it yet :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From nhodgson@bigpond.net.au  Mon Mar 19 12:06:40 2001
From: nhodgson@bigpond.net.au (Neil Hodgson)
Date: Mon, 19 Mar 2001 23:06:40 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com>
Message-ID: <09c001c0b06d$0f359eb0$8119fea9@neil>

Mark Hammond:

> To make Python work nicely with the file system, we really
> should handle Unicode characters somehow.  It is not too
> uncommon to find the "program files" or the "user" directory
> have Unicode characters in non-english version of Win2k.

   The "program files" and "user" directory should still have names
representable in the normal locale used by the user so they are able to
access them by using their standard encoding in a Python narrow character
string to the open function.

> The way I see it, to fix this we have 2 basic choices when a Unicode
object
> is passed as a filename:
> * we call the Unicode versions of the CRTL.

   This is by far the better approach IMO as it is more general and will
work for people who switch locales or who want to access files created by
others using other locales. Although you can always use the horrid mangled
"*~1" names.

> * we auto-encode using the "mbcs" encoding, and still call the non-Unicode
> versions of the CRTL.

   This will improve things but to a lesser extent than the above. May be
the best possible on 95.

> The first option has a problem in that determining what Unicode support
> Windows 95/98 have may be more trouble than it is worth.

    None of the *W file calls are listed as supported by 95 although Unicode
file names can certainly be used on FAT partitions.

> * I can switch to a German locale, and create a file using the
> keystrokes "`atest`o".  The "`" is the dead-char so I get an
> umlaut over the first and last characters.

   It's more fun playing with a non-Roman locale, one that doesn't fit in
the normal Windows code page, for this sort of problem. Russian is reasonably
readable for us English speakers.

M.-A. Lemburg:
> I don't know if this is an issue (can there
> be more than one encoding per process ?

   There is an input locale and keyboard layout per thread.

> is the encoding a user or system setting ?

   There are system defaults and a menu through which you can change the
locale whenever you want.

> Also, what would os.listdir() return ? Unicode strings or 8-bit
> strings ?

   There is the Windows approach of having an os.listdirW() ;) .

   Neil





From thomas@xs4all.net  Mon Mar 19 12:13:26 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 13:13:26 +0100
Subject: [Python-Dev] Makefile woos..
In-Reply-To: <0GAF003LYKEZSF@mta5.snfc21.pbi.net>; from dkwolfe@pacbell.net on Sun, Mar 18, 2001 at 09:57:53PM -0800
References: <0GAF003LYKEZSF@mta5.snfc21.pbi.net>
Message-ID: <20010319131325.N29286@xs4all.nl>

On Sun, Mar 18, 2001 at 09:57:53PM -0800, Dan Wolfe wrote:

> Question 1: I'm not geeky enough to understand why the '.3' get's 
> removed.... is there a problem with the SED script? or did I overlook 
> something?
> Question 2: I noticed that all the other versions are 
> <OS><MajorRevision> also - is this intentional? or is this just a result 
> of the bug in the SED script

I believe it's intentional. I'm pretty sure it'll break stuff if it's
changed, in any case. It relies on the convention that the OS release
numbers actually mean something: nothing serious changes when the minor
version number is upped, so there is no need to have a separate architecture
directory for it.

> If someone can help me understand what's going on here, I'll be glad to 
> submit the patch to fix the fcntl module and a few others on Mac OS X.

Are you sure the 'darwin1' arch name is really the problem ? As long as you
have that directory, which should be filled by 'make Lib/plat-darwin1' and
by 'make install' (but not by 'make test', unfortunately) it shouldn't
matter.

(So my guess is: you're doing configure, make, make test, and the
plat-darwin1 directory isn't made then, so tests that rely (indirectly) on
it will fail. Try using 'make plat-darwin1' before 'make test'.)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gherman@darwin.in-berlin.de  Mon Mar 19 12:21:44 2001
From: gherman@darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 13:21:44 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl>
Message-ID: <3AB5F9D8.74F0B55F@darwin.in-berlin.de>

Thomas Wouters wrote:
> 
> I think the main reason such a function does not exist is that no-one wrote
> it. If you can write a portable function, or fake one by making different
> implementations on different platforms, please contribute ;) Step one is
> making an inventory of the available functions, though, so you know how
> large an intersection you have to work with. The fact that you have to start
> that study is probably the #1 reason no-one's done it yet :)

Well, this is the usual "If you need it, do it yourself!"
answer, that bites the one who dares to speak up for all
those hundreds who don't... isn't it?

Rather than asking one non-expert in N-1 +/- 1 operating
systems to implement it, why not ask N experts in implementing
Python on 1 platform to do the job? (Notice the
potential for parallelism?! :)

Uhmm, seriously, does it really take 10 years for such an 
issue to creep up high enough on the priority ladder of 
Python-Labs? 

In any case it doesn't sound like a Python 3000 feature to 
me, or maybe it should?

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")


From mal@lemburg.com  Mon Mar 19 12:34:45 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 13:34:45 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de>
Message-ID: <3AB5FCE5.92A133AB@lemburg.com>

Dinu Gherman wrote:
> 
> Thomas Wouters wrote:
> >
> > I think the main reason such a function does not exist is that no-one wrote
> > it. If you can write a portable function, or fake one by making different
> > implementations on different platforms, please contribute ;) Step one is
> > making an inventory of the available functions, though, so you know how
> > large an intersection you have to work with. The fact that you have to start
> > that study is probably the #1 reason no-one's done it yet :)
> 
> Well, this is the usual "If you need it, do it yourself!"
> answer, that bites the one who dares to speak up for all
> those hundreds who don't... isn't it?
> 
> Rather than asking one non-expert in N-1 +/- 1 operating
> systems to implement it, why not ask N experts in imple-
> menting Python on 1 platform to do the job? (Notice the
> potential for parallelism?! :)

I think the problem with this one really is the differences
in OS designs, e.g. on Windows you have the concept of drive
letters where on Unix you have mounted file systems. Then there
also is the concept of disk space quota per user which would
have to be considered too.

Also, calculating the available disk space may return false
results (e.g. for Samba shares).

Perhaps what we really need is some kind of probing function
which tests whether a certain amount of disk space would be
available ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Mon Mar 19 12:43:23 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 13:43:23 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5F9D8.74F0B55F@darwin.in-berlin.de>; from gherman@darwin.in-berlin.de on Mon, Mar 19, 2001 at 01:21:44PM +0100
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de>
Message-ID: <20010319134323.W27808@xs4all.nl>

On Mon, Mar 19, 2001 at 01:21:44PM +0100, Dinu Gherman wrote:
> Thomas Wouters wrote:
> > 
> > I think the main reason such a function does not exist is that no-one wrote
> > it. If you can write a portable function, or fake one by making different
> > implementations on different platforms, please contribute ;) Step one is
> > making an inventory of the available functions, though, so you know how
> > large an intersection you have to work with. The fact that you have to start
> > that study is probably the #1 reason no-one's done it yet :)
> 
> Well, this is the usual "If you need it, do it yourself!"
> answer, that bites the one who dares to speak up for all
> those hundreds who don't... isn't it?
> 
> Rather than asking one non-expert in N-1 +/- 1 operating
> systems to implement it, why not ask N experts in imple-
> menting Python on 1 platform to do the job? (Notice the
> potential for parallelism?! :)
> 
> Uhmm, seriously, does it really take 10 years for such an 
> issue to creep up high enough on the priority ladder of 
> Python-Labs? 

> In any case it doesn't sound like a Python 3000 feature to 
> me, or maybe it should?

Nope. But you seem to misunderstand the idea behind Python development (and
most of open-source development.) PythonLabs has a *lot* of stuff they have
to do, and you cannot expect them to do everything. Truth is, this is not
likely to be done by Pythonlabs, and it will never be done unless someone
does it. It might sound harsh and unfriendly, but it's just a fact. It
doesn't mean *you* have to do it, but that *someone* has to do it. Feel free
to find someone to do it :)

As for the parallelism: that means getting even more people to volunteer for
the task. And the person(s) doing it still have to figure out the common
denominators in 'get me free disk space info'.

And the fact that it's *been* 10 years shows that no one cares enough about
the free disk space issue to actually get people to code it. 10 years filled
with a fair share of C programmers starting to use Python, so plenty of
those people could've done it :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@digicool.com  Mon Mar 19 12:57:09 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 07:57:09 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: Your message of "Mon, 19 Mar 2001 00:26:27 EST."
 <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com>
Message-ID: <200103191257.HAA25649@cj20424-a.reston1.va.home.com>

Is there any point still copying this thread to both
python-dev@python.org and python-numerics@lists.sourceforge.net?

It's best to move it to the latter, I "pronounce". :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)


From gherman@darwin.in-berlin.de  Mon Mar 19 12:58:48 2001
From: gherman@darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 13:58:48 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <20010319134323.W27808@xs4all.nl>
Message-ID: <3AB60288.2915DF32@darwin.in-berlin.de>

Thomas Wouters wrote:
> 
> Nope. But you seem to misunderstand the idea behind Python development (and
> most of open-source development.) 

Not sure what makes you think that, but anyway.

> PythonLabs has a *lot* of stuff they have
> to do, and you cannot expect them to do everything. Truth is, this is not
> likely to be done by Pythonlabs, and it will never be done unless someone
> does it.

Apparently, I agree, I know less about what makes truth here. 
What is probably valid is that having much to do is true for 
everybody and not much of an argument, is it?

> As for the parallelism: that means getting even more people to volunteer for
> the task. And the person(s) doing it still have to figure out the common
> denominators in 'get me free disk space info'.

I'm afraid this is like arguing in circles.

> And the fact that it's *been* 10 years shows that noone cares enough about
> the free disk space issue to actually get people to code it. 10 years filled
> with a fair share of C programmers starting to use Python, so plenty of
> those people could've done it :)

I'm afraid, again, that the impression you have of nobody in ten
years asking for this function is just that, an impression, 
unless *somebody* proves the contrary. 

All I can say is that I'm writing an app that I want to be 
cross-platform and that Python does not allow it to be just 
that, while Google gives you 17400 hits if you look for 
"python cross-platform". Now, this is also some kind of 
*truth* if only one of a mismatch between reality and wish-
ful thinking...

Regards,

Dinu


From guido@digicool.com  Mon Mar 19 13:00:44 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 08:00:44 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: Your message of "Mon, 19 Mar 2001 15:02:55 +1200."
 <200103190302.PAA06055@s454.cosc.canterbury.ac.nz>
References: <200103190302.PAA06055@s454.cosc.canterbury.ac.nz>
Message-ID: <200103191300.IAA25681@cj20424-a.reston1.va.home.com>

> Christian Tismer <tismer@tismer.com>:
> 
> > But stopping the interpreter is a perfect unwind, and we
> > can start again from anywhere.
> 
> Hmmm... Let me see if I have this correct.
> 
> You can switch from uthread A to uthread B as long
> as the current depth of interpreter nesting is the
> same as it was when B was last suspended. It doesn't
> matter if the interpreter has returned and then
> been called again, as long as it's at the same
> level of nesting on the C stack.
> 
> Is that right? Is that the only restriction?

I doubt it.  To me (without a lot of context, but knowing ceval.c :-)
it would make more sense if the requirement was that there were no C
stack frames involved in B -- only Python frames.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From mal@lemburg.com  Mon Mar 19 13:07:25 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 14:07:25 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <3AB5FCE5.92A133AB@lemburg.com> <3AB5FFB8.E138160A@darwin.in-berlin.de>
Message-ID: <3AB6048D.4E24AC4F@lemburg.com>

Dinu Gherman wrote:
> 
> "M.-A. Lemburg" wrote:
> >
> > I think the problem with this one really is the differences
> > in OS designs, e.g. on Windows you have the concept of drive
> > letters where on Unix you have mounted file systems. Then there
> > also is the concept of disk space quota per user which would
> > have to be considered too.
> 
> I'd be perfectly happy with something like this:
> 
>   import os
>   free = os.getfreespace('c:\\')          # on Win
>   free = os.getfreespace('/hd5')          # on Unix-like boxes
>   free = os.getfreespace('Macintosh HD')  # on Macs
>   free = os.getfreespace('ZIP-1')         # on Macs, Win, ...
> 
> etc. where the string passed is, a-priori, a name known
> by the OS for some permanent or removable drive. Network
> drives might be slightly more tricky, but probably not
> entirely impossible, I guess.

This sounds like a lot of different platform C APIs would need
to be wrapped first, e.g. quotactrl, getrlimit (already done)
+ a bunch of others since "get free space" is usually a file system
dependent call.

I guess we should take a look at how "df" does this on Unix
and maybe trick Mark Hammond into looking up the win32 API ;-)

> > Perhaps what we really need is some kind of probing function
> > which tests whether a certain amount of disk space would be
> > available ?!
> 
> Something like incrementally stuffing it with junk data until
> you get an exception, right? :)

Yep. Actually opening a file in record mode and then using
file.seek() should work on many platforms.
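
A probing function along those lines might look like this (a hypothetical sketch; on filesystems with sparse files a bare seek() reserves nothing, so a byte is written at the far end to force real allocation):

```python
import os
import tempfile

def probe_free_space(directory, nbytes):
    """Return True if nbytes can actually be allocated in directory."""
    fd, name = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.seek(nbytes - 1)
            f.write(b"\0")   # force allocation past a sparse seek
        return True
    except OSError:
        return False
    finally:
        os.remove(name)

assert probe_free_space(".", 64 * 1024)
```

Note the obvious drawback of probing: it briefly consumes the space it is testing for, which is why a real query API is preferable where one exists.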

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From fredrik@pythonware.com  Mon Mar 19 13:04:59 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Mon, 19 Mar 2001 14:04:59 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de>
Message-ID: <029401c0b075$3c18e2e0$0900a8c0@SPIFF>

dinu wrote:
> Well, this is the usual "If you need it, do it yourself!"
> answer, that bites the one who dares to speak up for all
> those hundreds who don't... isn't it?

fwiw, Python already supports this for real Unix platforms:

>>> os.statvfs("/")    
(8192, 512, 524288, 365788, 348336, 600556, 598516, 598516, 0, 255)

here, the root disk holds 524288x512 bytes, with 348336x512
bytes free for the current user, and 365788x512 bytes available
for root.

(the statvfs module contains indices for accessing this "struct")
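
The arithmetic on those fields can be spelled out like so (an editor's sketch for any Unix; the tuple slots later grew names, but the indices still line up with the example above):

```python
import os

st = os.statvfs("/")

# In the tuple above: index 1 is the fragment size (512), index 2 the
# total number of fragments, index 3 those free for root, and index 4
# those free for an ordinary user.
total     = st.f_frsize * st.f_blocks   # 524288 x 512 in the example
free_root = st.f_frsize * st.f_bfree    # 365788 x 512
free_user = st.f_frsize * st.f_bavail   # 348336 x 512

assert total >= free_root >= free_user >= 0
```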

Implementing a small subset of statvfs for Windows wouldn't
be that hard (possibly returning None for fields that don't make
sense, or are too hard to figure out).

(and with win32all, I'm sure it can be done without any C code).

Cheers /F



From guido@digicool.com  Mon Mar 19 13:12:58 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 08:12:58 -0500
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: Your message of "Mon, 19 Mar 2001 21:53:01 +1100."
 <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com>
References: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com>
Message-ID: <200103191312.IAA25747@cj20424-a.reston1.va.home.com>

> > Also, what would os.listdir() return ? Unicode strings or 8-bit
> > strings ?
> 
> This would not change.
> 
> This is what my testing shows:
> 
> * I can switch to a German locale, and create a file using the keystrokes
> "`atest`o".  The "`" is the dead-char so I get an umlaut over the first and
> last characters.

(Actually, grave accents, but I'm sure that to Aussie eyes, as to
Americans, they're all Greek. :-)

> * os.listdir() returns '\xe0test\xf2' for this file.

I don't understand.  This is a Latin-1 string.  Can you explain again
how the MBCS encoding encodes characters outside the Latin-1 range?

> * That same string can be passed to "open" etc to open the file.
> 
> * The only way to get that string to a Unicode object is to use the
> encodings "Latin1" or "mbcs".  Of them, "mbcs" would have to be safer, as at
> least it has a hope of handling non-latin characters :)
> 
> So - assume I am passed a Unicode object that represents this filename.  At
> the moment we simply throw that exception if we pass that Unicode object to
> open().  I am proposing that "mbcs" be used in this case instead of the
> default "ascii"
> 
> If nothing else, my idea could be considered a "short-term" solution.  If
> ever it is found to be a problem, we can simply move to the unicode APIs,
> and nothing would break - just possibly more things _would_ work :)

I have one more question.  The plan looks decent, but I don't know the
scope.  Which calls do you plan to fix?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From thomas@xs4all.net  Mon Mar 19 13:18:34 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 14:18:34 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB60288.2915DF32@darwin.in-berlin.de>; from gherman@darwin.in-berlin.de on Mon, Mar 19, 2001 at 01:58:48PM +0100
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <20010319134323.W27808@xs4all.nl> <3AB60288.2915DF32@darwin.in-berlin.de>
Message-ID: <20010319141834.X27808@xs4all.nl>

On Mon, Mar 19, 2001 at 01:58:48PM +0100, Dinu Gherman wrote:

> All I can say is that I'm writing an app that I want to be 
> cross-platform and that Python does not allow it to be just 
> that, while Google gives you 17400 hits if you look for 
> "python cross-platform". Now, this is also some kind of 
> *truth* if only one of a mismatch between reality and wish-
> ful thinking...

I'm sure I agree, but I don't see the value in dropping everything to write
a function so Python can be that much more cross-platform. (That's just me,
though.) Python wouldn't *be* as cross-platform as it is now if not for a
group of people who weren't satisfied with it, and improved on it. And a lot
of those people were not Guido or even of the current PythonLabs team.

I've never really believed in the 'true cross-platform nature' of Python,
mostly because I know it can't *really* be true. Most of my scripts are not
portable to non-UNIX platforms, due to the use of sockets, pipes, and
hardcoded file paths (/usr/...). Even if I did, I can hardly agree that
because there is no portable way (if any at all) to find out how much
disk space is free, it isn't cross-platform. Just *because* it lacks that
function makes it more cross-platform: platforms might not have the concept
of 'free space' :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From gherman@darwin.in-berlin.de  Mon Mar 19 13:23:51 2001
From: gherman@darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 14:23:51 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <20010319134323.W27808@xs4all.nl> <3AB60288.2915DF32@darwin.in-berlin.de> <20010319141834.X27808@xs4all.nl>
Message-ID: <3AB60867.3D2A9DF@darwin.in-berlin.de>

Thomas Wouters wrote:
> 
> I've never really believed in the 'true cross-platform nature' of Python,
> mostly because I know it can't *really* be true. Most of my scripts are not
> portable to non-UNIX platforms, due to the use of sockets, pipes, and
> hardcoded filepaths (/usr/...). Even if they were, I can hardly agree that
> because there is no portable way (if any at all) to find out how much
> disk space is free, it isn't cross-platform. If anything, *lacking* that
> function makes it more cross-platform: platforms might not have the concept
> of 'free space' :)

Hmm, that means we had better strip the standard library of
most of its modules (why not all?), because the less 
content there is, the more cross-platform it will be, 
right?

Well, if the concept is not there, simply throw a neat 
ConceptException! ;-)

Dinu


From gherman@darwin.in-berlin.de  Mon Mar 19 13:32:17 2001
From: gherman@darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 14:32:17 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>
Message-ID: <3AB60A61.A4BB2768@darwin.in-berlin.de>

Fredrik Lundh wrote:
> 
> fwiw, Python already supports this for real Unix platforms:
> 
> >>> os.statvfs("/")
> (8192, 512, 524288, 365788, 348336, 600556, 598516, 598516, 0, 255)
> 
> here, the root disk holds 524288x512 bytes, with 348336x512
> bytes free for the current user, and 365788x512 bytes available
> for root.
> 
> (the statvfs module contains indices for accessing this "struct")
> 
> Implementing a small subset of statvfs for Windows wouldn't
> be that hard (possibly returning None for fields that don't make
> sense, or are too hard to figure out).
> 
> (and with win32all, I'm sure it can be done without any C code).
> 
> Cheers /F
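
[Editorial aside: for concreteness, Fredrik's arithmetic can be read off
that tuple by index. The index names below match the constants of the old
`statvfs` helper module he mentions (F_FRSIZE=1, F_BLOCKS=2, F_BFREE=3,
F_BAVAIL=4); later Python versions expose the same fields as attributes.]

```python
# The tuple Fredrik shows, indexed the way the statvfs module named it:
vfs = (8192, 512, 524288, 365788, 348336, 600556, 598516, 598516, 0, 255)
F_FRSIZE, F_BLOCKS, F_BFREE, F_BAVAIL = 1, 2, 3, 4

total = vfs[F_BLOCKS] * vfs[F_FRSIZE]      # 524288 * 512 bytes on the disk
user_free = vfs[F_BAVAIL] * vfs[F_FRSIZE]  # 348336 * 512 bytes for this user
root_free = vfs[F_BFREE] * vfs[F_FRSIZE]   # 365788 * 512 bytes for root
```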

Everything correct! 

I'm just trying to make the point that from a user perspective 
it would be more complete, and more convenient, to have such a 
function in the os module (where it belongs), one that would also 
work on Macs, because even where it exists in modules like 
win32api (where it does) and in one of the (many) mac* ones 
(which I don't know yet if it does), it would save you the 
if-statement on sys.platform.
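
[Editorial aside: the dispatch Dinu wants to avoid looks roughly like the
sketch below. The POSIX branch is standard os.statvfs (in the modern
attribute spelling); the Windows branch assumes pywin32's
win32api.GetDiskFreeSpace and its four-tuple result, so treat it as an
illustration rather than tested code.]

```python
import os
import sys

def freespace(path):
    # Per-platform dispatch: exactly the if-statement being complained about.
    if hasattr(os, "statvfs"):          # POSIX
        s = os.statvfs(path)
        return s.f_bavail * s.f_frsize  # blocks available to this user
    elif sys.platform == "win32":       # assumes pywin32 is installed
        import win32api
        per_cluster, per_sector, free_clusters, _total = \
            win32api.GetDiskFreeSpace(path)
        return per_cluster * per_sector * free_clusters
    raise NotImplementedError("no free-space call known for " + sys.platform)
```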

It sounds silly to me if people now pushed into learning Python
as a first programming language had to use such statements to
get along, but were given the 'gift' of 1/2 = 0.5, which we seem
to spend an increasing amount of brain cycles on...

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")


From Greg.Wilson@baltimore.com  Mon Mar 19 13:32:21 2001
From: Greg.Wilson@baltimore.com (Greg Wilson)
Date: Mon, 19 Mar 2001 08:32:21 -0500
Subject: [Python-Dev] BOOST Python library
Message-ID: <930BBCA4CEBBD411BE6500508BB3328F1AC817@nsamcanms1.ca.baltimore.com>

Might be of interest to people binding C++ to Python...

http://www.boost.org/libs/python/doc/index.html

Greg

By the way, http://mail.python.org/pipermail/python-list/
now seems to include archives for February 2005.  Is this
another "future" import?




From tismer@tismer.com  Mon Mar 19 13:46:19 2001
From: tismer@tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 14:46:19 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103190302.PAA06055@s454.cosc.canterbury.ac.nz> <200103191300.IAA25681@cj20424-a.reston1.va.home.com>
Message-ID: <3AB60DAB.D92D12BF@tismer.com>


Guido van Rossum wrote:
> 
> > Christian Tismer <tismer@tismer.com>:
> >
> > > But stopping the interpreter is a perfect unwind, and we
> > > can start again from anywhere.
> >
> > Hmmm... Let me see if I have this correct.
> >
> > You can switch from uthread A to uthread B as long
> > as the current depth of interpreter nesting is the
> > same as it was when B was last suspended. It doesn't
> > matter if the interpreter has returned and then
> > been called again, as long as it's at the same
> > level of nesting on the C stack.
> >
> > Is that right? Is that the only restriction?
> 
> I doubt it.  To me (without a lot of context, but knowing ceval.c :-)
> it would make more sense if the requirement was that there were no C
> stack frames involved in B -- only Python frames.

Right. And that is only a dynamic restriction. It does not
matter how and where frames were created; it is just impossible
to jump to a frame that is held by an interpreter on the C stack.
The key to circumventing this (and the advantage of uthreads) is
not to force a jump out of a nested interpreter, but to arrange
for it to happen. That is, the scheduling interpreter
does the switch, not the nested one.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From fredrik@pythonware.com  Mon Mar 19 13:54:03 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Mon, 19 Mar 2001 14:54:03 +0100
Subject: [Python-Dev] BOOST Python library
References: <930BBCA4CEBBD411BE6500508BB3328F1AC817@nsamcanms1.ca.baltimore.com>
Message-ID: <02ba01c0b07c$0ff8c9d0$0900a8c0@SPIFF>

greg wrote:
> By the way, http://mail.python.org/pipermail/python-list/
> now seems to include archives for February 2005.  Is this
> another "future" import?

did you read the post?



From gmcm@hypernet.com  Mon Mar 19 14:27:04 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Mon, 19 Mar 2001 09:27:04 -0500
Subject: [Python-Dev] Function in os module for available disk space, why  not?
In-Reply-To: <3AB60A61.A4BB2768@darwin.in-berlin.de>
Message-ID: <3AB5D0E8.16418.990252B8@localhost>

Dinu Gherman wrote:

[disk free space...]
> I'm just trying to make the point that from a user perspective it
> would be more complete, and more convenient, to have such a
> function in the os module (where it belongs), one that would also
> work on Macs, because even where it exists in modules like
> win32api (where it does) and in one of the (many) mac* ones
> (which I don't know yet if it does), it would save you the
> if-statement on sys.platform.

Considering that:
 - it's not uncommon to map things into the filesystem's 
namespace for which "free space" is meaningless
 - for network mapped storage space it's quite likely you can't 
get a meaningful number
 - for compressed file systems the number will be inaccurate
 - even if you get an accurate answer, the space may not be 
there when you go to use it (so you need try...except anyway)

I find it perfectly sensible that Python does not dignify this 
mess with an official function.
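
[Editorial aside: Gordon's last point is the decisive one. Whatever number
a free-space call returned, the write can still fail, so robust code guards
the write itself. A minimal sketch, in later-Python spelling:]

```python
import errno
import os
import tempfile

def try_write(path, data):
    """Attempt the write itself, instead of trusting a free-space
    figure that may be stale by the time the bytes hit the disk."""
    try:
        with open(path, "wb") as f:
            f.write(data)
        return True
    except OSError as e:
        if e.errno == errno.ENOSPC:  # space vanished since we checked
            return False
        raise

path = os.path.join(tempfile.mkdtemp(), "probe.bin")
ok = try_write(path, b"x" * 1024)
```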

- Gordon


From guido@digicool.com  Mon Mar 19 14:58:29 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 09:58:29 -0500
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: Your message of "Mon, 19 Mar 2001 14:32:17 +0100."
 <3AB60A61.A4BB2768@darwin.in-berlin.de>
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>
 <3AB60A61.A4BB2768@darwin.in-berlin.de>
Message-ID: <200103191458.JAA26035@cj20424-a.reston1.va.home.com>

> I'm just trying to make the point that from a user perspective 
> it would be more complete, and more convenient, to have such a 
> function in the os module (where it belongs), one that would also 
> work on Macs, because even where it exists in modules like 
> win32api (where it does) and in one of the (many) mac* ones 
> (which I don't know yet if it does), it would save you the 
> if-statement on sys.platform.

Yeah, yeah, yeah.  Whine, whine, whine.  As has been made abundantly
clear, doing this cross-platform requires a lot of detailed platform
knowledge.  We at PythonLabs don't have all the wisdom, and we often
rely on outsiders to help us out.  Until now, finding out how much
free space there is on a disk hasn't been requested much (in fact I
don't recall seeing a request for it before).  That's why it isn't
already there -- that plus the fact that traditionally on Unix this
isn't easy to find out (statvfs didn't exist when I wrote most of the
posix module).  I'm not against adding it, but I'm not particularly
motivated to add it myself because I have too much to do already (and
the same's true for all of us here at PythonLabs).

> It sounds silly to me if people now pushed into learning Python
> as a first programming language had to use such statements to
> get along, but were given the 'gift' of 1/2 = 0.5, which we seem
> to spend an increasing amount of brain cycles on...

I would hope that you agree with me though that the behavior of
numbers is a lot more fundamental to education than finding out
available disk space.  The latter is just a system call of use to a
small number of professionals.  The former has usability implications
for all Python users.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From gherman@darwin.in-berlin.de  Mon Mar 19 15:32:51 2001
From: gherman@darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 16:32:51 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
 not?
References: <3AB5D0E8.16418.990252B8@localhost>
Message-ID: <3AB626A3.CA4B6174@darwin.in-berlin.de>

Gordon McMillan wrote:
> 
> Considering that:
>  - it's not uncommon to map things into the filesystem's
>    namespace for which "free space" is meaningless

Unless I'm totally stupid, I see the concept of "free space" as
being tied to the *device*, not to whatever may or may not be
mapped onto it.

>  - for network mapped storage space it's quite likely you can't
>    get a meaningful number

Fine, then let's play the exception blues...

>  - for compressed file systems the number will be inaccurate

Then why is the OS function call there...? And: nobody can
*seriously* expect an accurate figure of the remaining space for
compressed file systems anyway, and I think nobody does! But there
will always be some number >= 0 of uncompressed available bytes 
left.

>  - even if you get an accurate answer, the space may not be
>    there when you go to use it (so need try... except anyway)

The same holds for open(path, 'w') - and still this function is 
considered useful, isn't it?!

> I find it perfectly sensible that Python does not dignify this
> mess with an official function.

Well, I have yet to see a good argument against this...

Regards,

Dinu


From mal@lemburg.com  Mon Mar 19 15:46:34 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 16:46:34 +0100
Subject: [Python-Dev] BOOST Python library
References: <930BBCA4CEBBD411BE6500508BB3328F1AC817@nsamcanms1.ca.baltimore.com>
Message-ID: <3AB629DA.52C72E57@lemburg.com>

Greg Wilson wrote:
> 
> Might be of interest to people binding C++ to Python...
> 
> http://www.boost.org/libs/python/doc/index.html

Could someone please add links to all the tools they mention
in their comparison to the c++-sig page (not even SWIG is mentioned
there).

  http://www.boost.org/libs/python/doc/comparisons.html

BTW, most SIGs have long expired... I guess bumping the year from
2000 to 2002 would help ;-)

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From tismer@tismer.com  Mon Mar 19 15:49:37 2001
From: tismer@tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 16:49:37 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com>
Message-ID: <3AB62A91.1DBE7F8B@tismer.com>


Neil Schemenauer wrote:
> 
> I've got a different implementation.  There are no new keywords
> and it's simpler to wrap a high-level interface around the
> low-level interface.
> 
>     http://arctrix.com/nas/python/generator2.diff
> 
> What the patch does:
> 
>     Split the big for loop and switch statement out of eval_code2
>     into PyEval_EvalFrame.
> 
>     Add a new "why" flag for ceval, WHY_SUSPEND.  It is similar to
>     WHY_RETURN except that the frame value stack and the block stack
>     are not touched.  The frame is also marked resumable before
>     returning (f_stackbottom != NULL).
> 
>     Add two new methods to frame objects, suspend and resume.
>     suspend takes one argument which gets attached to the frame
>     (f_suspendvalue).  This tells ceval to suspend as soon as control
>     gets back to this frame.  resume, strangely enough, resumes a
>     suspended frame.  Execution continues at the point it was
>     suspended.  This is done by calling PyEval_EvalFrame on the frame
>     object.
> 
>     Make frame_dealloc clean up the stack and decref f_suspendvalue
>     if it exists.
> 
> There are probably still bugs and it slows down ceval too much,
> but otherwise things are looking good.  Here are some examples
> (they're a little long but illustrative).  Low-level
> interface, similar to my last example:

I've had a closer look at your patch (without actually applying
and running it) and it looks good to me.
A possible bug may be in frame_resume, where you are doing
+       f->f_back = tstate->frame;
without taking care of the prior value of f_back.

There is a little problem with your approach, which I have
to mention: I believe that, without further patching, it will be
easy to crash Python.
By giving frames the suspend and resume methods, you are
opening frames to everybody in a way that allows treating
them as a kind of callable object. This is the same problem
that Stackless ran into.
By doing so, it might be possible to call any frame, even
if it is currently being run by a nested interpreter.

I see two solutions to get out of this:

1) introduce a lock flag for frames which are currently
   executed by some interpreter on the C stack. This is
   what Stackless does currently.
   Maybe you can just use your new f_suspendvalue field.
   frame_resume must check that this value is not NULL
   on entry, and set it zero before resuming.
   See below for more.

2) Do not expose the resume and suspend methods to the
   Python user, and recode Generator.py as an extension
   module in C. This should prevent abuse of frames.

Proposal for a different interface:
I would change the interface of PyEval_EvalFrame
to accept a return value passed in, like Stackless
has its "passed_retval", and maybe another variable
that explicitly tells the kind of the frame call,
i.e. passing the desired why_code. This also would
make it easier to cope with the other needs of Stackless
later in a cleaner way.
Well, I see you are clearing the f_suspendvalue later.
Maybe just adding the why_code to the parameters
would do. f_suspendvalue can be used for different
things, it can also become the place to store a return
value, or a coroutine transfer parameter.

In the future, there will not only be the suspend/resume
interface. Frames will be called for different reasons:
suspend  with a value  (generators)
return   with a value  (normal function calls)
transfer with a value  (coroutines)
transfer with no value (microthreads)
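
[Editorial aside: the first two call kinds on Christian's list are exactly
what later shipped as Python generators, where "suspend with a value" is a
yield and the resume can carry a value back in. Generator .send() did not
exist in 2001, so the sketch below is an after-the-fact illustration, not
the interface under discussion:]

```python
def gen():
    # suspend with a value: execution freezes at the yield
    x = yield 1
    # resumed with a value passed back in by the caller
    yield x + 1

g = gen()
first = next(g)      # runs to the first yield; first == 1
second = g.send(41)  # resumes, binding 41 to x; second == 42
```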

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From moshez@zadka.site.co.il  Mon Mar 19 16:00:01 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Mon, 19 Mar 2001 18:00:01 +0200
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5F43D.E33B188D@darwin.in-berlin.de>
References: <3AB5F43D.E33B188D@darwin.in-berlin.de>
Message-ID: <E14f24f-0004ny-00@darjeeling>

On Mon, 19 Mar 2001 12:57:49 +0100, Dinu Gherman <gherman@darwin.in-berlin.de> wrote:
> I wrote on comp.lang.python today:
> > 
> > is there a simple way (or any way at all) to find out for 
> > any given hard disk how much free space is left on that
> > device? I looked into the os module, but either not hard
> > enough or there is no such function. Of course, the ideal
> > solution would be platform-independant, too... :)
> 
> Is there any good reason for not having a cross-platform
> solution to this? I'm certainly not the first to ask for
> such a function and it certainly exists for all platforms,
> doesn't it?

No, it doesn't.
Specifically, the information is always unreliable, especially
when you start considering NFS mounted directories and things
like that.

> I know that OS differ in the services they provide, but in
> this case it seems to me that each one *must* have such a 
> function

This doesn't have a *meaning* in UNIX.  (In the sense that I can
think of so many special cases that having a half-working
implementation is worse than nothing.)
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From gherman@darwin.in-berlin.de  Mon Mar 19 16:06:27 2001
From: gherman@darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 17:06:27 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>
 <3AB60A61.A4BB2768@darwin.in-berlin.de> <200103191458.JAA26035@cj20424-a.reston1.va.home.com>
Message-ID: <3AB62E83.ACBDEB3@darwin.in-berlin.de>

Guido van Rossum wrote:
> 
> Yeah, yeah, yeah.  Whine, whine, whine. [...]
> I'm not against adding it, but I'm not particularly motivated 
> to add it myself [...]

Good! After some quick research on Google, it turns out 
this function is also available on MacOS, as expected, named 
PBHGetVInfo(). See this page for details plus a sample Pascal 
function using it:

  http://developer.apple.com/techpubs/mac/Files/Files-96.html

I'm not sure what else is needed to use it, but at least it's
there and maybe somebody more of a Mac expert than I am could
help out here... I'm going to continue this on c.l.p. in the
original thread... Hey, maybe it is already available in one
of the many mac packages. Well, I'll start some digging...

> I would hope that you agree with me though that the behavior of
> numbers is a lot more fundamental to education than finding out
> available disk space.  The latter is just a system call of use 
> to a small number of professionals.  The former has usability 
> implications for all Python users.

I do agree, sort of, but it appears that often there is much 
more work being spent on fantastic new features, where improving
existing ones would also be very beneficial. For me at least,
there is considerable value in a system's consistency and
completeness, and not only in its number of features.

Thanks everybody (now that Guido has spoken we have to finish)! 
It was fun! :)

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")


From guido@digicool.com  Mon Mar 19 16:32:33 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 11:32:33 -0500
Subject: [Python-Dev] Python T-shirts
Message-ID: <200103191632.LAA26632@cj20424-a.reston1.va.home.com>

At the conference we handed out T-shirts with the slogan on the back
"Python: programming the way Guido indented it".  We've been asked if
there are any left.  Well, we gave them all away, but we're ordering
more.  You can get them for $10 + S+H.  Write to Melissa Light
<melissa@digicool.com>.  Be nice to her!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From nas@arctrix.com  Mon Mar 19 16:45:35 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 08:45:35 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <3AB62A91.1DBE7F8B@tismer.com>; from tismer@tismer.com on Mon, Mar 19, 2001 at 04:49:37PM +0100
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com>
Message-ID: <20010319084534.A18938@glacier.fnational.com>

On Mon, Mar 19, 2001 at 04:49:37PM +0100, Christian Tismer wrote:
> A possible bug may be in frame_resume, where you are doing
> +       f->f_back = tstate->frame;
> without taking care of the prior value of f_back.

Good catch.  There is also a bug when f_suspendvalue is being set
(Py_XDECREF should be called first).

[Christian on disallowing resume on frame already running]
> 1) introduce a lock flag for frames which are currently
>    executed by some interpreter on the C stack. This is
>    what Stackless does currently.
>    Maybe you can just use your new f_suspendvalue field.
>    frame_resume must check that this value is not NULL
>    on entry, and set it zero before resuming.

Another good catch.  It would be easy to set f_stackbottom to
NULL at the top of PyEval_EvalFrame.  resume already checks this
to decide if the frame is resumable.

> 2) Do not expose the resume and suspend methods to the
>    Python user, and recode Generator.py as an extension
>    module in C. This should prevent abuse of frames.

I like the frame methods.  However, this may be a good idea since
Jython may implement things quite differently.

> Proposal for a different interface:
> I would change the interface of PyEval_EvalFrame
> to accept a return value passed in, like Stackless
> has its "passed_retval", and maybe another variable
> that explicitly tells the kind of the frame call,
> i.e. passing the desired why_code. This also would
> make it easier to cope with the other needs of Stackless
> later in a cleaner way.
> Well, I see you are clearing the f_suspendvalue later.
> Maybe just adding the why_code to the parameters
> would do. f_suspendvalue can be used for different
> things, it can also become the place to store a return
> value, or a coroutine transfer parameter.
> 
> In the future, there will not obly be the suspend/resume
> interface. Frames will be called for different reasons:
> suspend  with a value  (generators)
> return   with a value  (normal function calls)
> transfer with a value  (coroutines)
> transfer with no value (microthreads)

The interface needs some work and I'm happy to change it to
better accommodate stackless.  f_suspendvalue and f_stackbottom
are pretty ugly, IMO.  One unexpected benefit: with
PyEval_EvalFrame split out of eval_code2 the interpreter is 5%
faster on my machine.  I suspect the compiler has an easier time
optimizing the loop in the smaller function.

BTW, where is this stackless light patch I've been hearing about?
I would be interested to look at it.  Thanks for your comments.

  Neil


From tismer@tismer.com  Mon Mar 19 16:58:46 2001
From: tismer@tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 17:58:46 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com>
Message-ID: <3AB63AC6.4799C73@tismer.com>


Neil Schemenauer wrote:
...
> > 2) Do not expose the resume and suspend methods to the
> >    Python user, and recode Generator.py as an extension
> >    module in C. This should prevent abuse of frames.
> 
> I like the frame methods.  However, this may be a good idea since
> Jython may implement things quite differently.

Maybe a good reason. Exposing frame methods is nice
to play with, but eventually you will want the hard-coded
generators. The same thing is happening with Stackless
now: I have a different spelling for frames :-) but
they have to vanish now.

[immature pre-pre-pre-interface]
> The interface needs some work and I'm happy to change it to
> better accommodate stackless.  f_suspendvalue and f_stackbottom
> are pretty ugly, IMO.  One unexpected benefit: with
> PyEval_EvalFrame split out of eval_code2 the interpreter is 5%
> faster on my machine.  I suspect the compiler has an easier time
> optimizing the loop in the smaller function.

Really!? I thought you reported a speed loss?

> BTW, where is this stackless light patch I've been hearing about?
> I would be interested to look at it.  Thanks for your comments.

It does not exist at all. It is just an idea, and
we are looking for somebody who can implement it.
At the moment, we have a PEP (thanks to Gordon), but
there is no specification of StackLite yet.
I believe PEPs are a good idea.
In this special case, I'd recommend trying to write
a StackLite first, and then writing the PEP :-)

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From mal@lemburg.com  Mon Mar 19 16:07:10 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 17:07:10 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>
Message-ID: <3AB62EAE.FCFD7C9F@lemburg.com>

Fredrik Lundh wrote:
> 
> dinu wrote:
> > Well, this is the usual "If you need it, do it yourself!"
> > answer, that bites the one who dares to speak up for all
> > those hundreds who don't... isn't it?
> 
> fwiw, Python already supports this for real Unix platforms:
> 
> >>> os.statvfs("/")
> (8192, 512, 524288, 365788, 348336, 600556, 598516, 598516, 0, 255)
> 
> here, the root disk holds 524288x512 bytes, with 348336x512
> bytes free for the current user, and 365788x512 bytes available
> for root.
> 
> (the statvfs module contains indices for accessing this "struct")
> 
> Implementing a small subset of statvfs for Windows wouldn't
> be that hard (possibly returning None for fields that don't make
> sense, or are too hard to figure out).
> 
> (and with win32all, I'm sure it can be done without any C code).

It seems that all we need is Jack to port this to the Mac
and we have a working API here :-)

Let's do it...

import sys,os

try:
    os.statvfs

except AttributeError:
    # Win32 implementation...
    # Mac implementation...
    pass

else:
    import statvfs
    
    def freespace(path):
        """ freespace(path) -> integer
        Return the number of bytes available to the user on the file system
        pointed to by path."""
        s = os.statvfs(path)
        return s[statvfs.F_BAVAIL] * long(s[statvfs.F_BSIZE])

if __name__=='__main__':
    path = sys.argv[1]
    print 'Free space on %s: %i kB (%i bytes)' % (path,
                                                  freespace(path) / 1024,
                                                  freespace(path))

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From Samuele Pedroni <pedroni@inf.ethz.ch>  Mon Mar 19 17:08:41 2001
From: Samuele Pedroni <pedroni@inf.ethz.ch> (Samuele Pedroni)
Date: Mon, 19 Mar 2001 18:08:41 +0100 (MET)
Subject: [Python-Dev] Simple generators, round 2
Message-ID: <200103191708.SAA09258@core.inf.ethz.ch>

Hi.

> > 2) Do not expose the resume and suspend methods to the
> >    Python user, and recode Generator.py as an extension
> >    module in C. This should prevent abuse of frames.
> 
> I like the frame methods.  However, this may be a good idea since
> Jython may implement things quite differently.

I should repeat this (if we want to avoid threads for implementing
generators, because for them that's really overkill, especially
if they are used in tight loops): the Jython codebase has the
following limitations:

- suspension points must be known at compilation time
  (we produce JVM bytecode, which would have to be instrumented
  to allow restarting at a given point). The only other solution
  is to compile a method with a big switch that has a case
  for every Python line, which is quite expensive.
  
- a suspension point can at most do a return; it cannot go up 
  more than a single frame, even if it just wants to discard them.
  Maybe there is a workaround for this using exceptions, but they
  are expensive and again overkill for a tight loop.

=> we can support something like a suspend keyword. The rest is pain :-( .
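
[Editorial aside: the "big switch" compilation Samuele describes can be
sketched by hand. Each suspension point becomes a case in a dispatch on a
state variable, which is exactly why the suspension points must be known
at compile time:]

```python
class Resumable:
    """Hand-instrumented equivalent of a two-yield generator:

        def gen():
            yield 1
            yield 2

    The state variable selects the re-entry point on each resume."""

    def __init__(self):
        self.state = 0

    def resume(self):
        if self.state == 0:    # entry -> first suspension point
            self.state = 1
            return 1
        elif self.state == 1:  # first -> second suspension point
            self.state = 2
            return 2
        raise StopIteration    # exhausted

r = Resumable()
values = [r.resume(), r.resume()]  # [1, 2]
```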

regards.



From nas@arctrix.com  Mon Mar 19 17:21:59 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 09:21:59 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <3AB63AC6.4799C73@tismer.com>; from tismer@tismer.com on Mon, Mar 19, 2001 at 05:58:46PM +0100
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com>
Message-ID: <20010319092159.B19071@glacier.fnational.com>

[Neil]
> One unexpected benefit: with PyEval_EvalFrame split out of
> eval_code2 the interpreter is 5% faster on my machine.  I
> suspect the compiler has an easier time optimizing the loop in
> the smaller function.

[Christian]
> Really!? I thought you reported a speed loss?

You must be referring to an earlier post I made.  That was purely
speculation.  I didn't time things until the weekend.  Also, the
5% speedup is based on the refactoring of eval_code2 with the
added generator bits.  I wouldn't put much weight on the apparent
speedup either.  It's probably slower on other platforms.

  Neil


From tismer@tismer.com  Mon Mar 19 17:25:43 2001
From: tismer@tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 18:25:43 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com>
Message-ID: <3AB64117.8D3AEBED@tismer.com>


Neil Schemenauer wrote:
> 
> [Neil]
> > One unexpected benefit: with PyEval_EvalFrame split out of
> > eval_code2 the interpreter is 5% faster on my machine.  I
> > suspect the compiler has an easier time optimizing the loop in
> > the smaller function.
> 
> [Christian]
> > Really!? I thought you told about a speed loss?
> 
> You must be referring to an earlier post I made.  That was purely
> speculation.  I didn't time things until the weekend.  Also, the
> 5% speedup is base on the refactoring of eval_code2 with the
> added generator bits.  I wouldn't put much weight on the apparent
> speedup either.  Its probably slower on other platforms.

Nevermind. I believe this is going to be the best possible
efficient implementation of generators.
And I'm very confident that it will make it into the
core with ease and without the need for a PEP.

congrats - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From nas@arctrix.com  Mon Mar 19 17:27:33 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 09:27:33 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010319092159.B19071@glacier.fnational.com>; from nas@arctrix.com on Mon, Mar 19, 2001 at 09:21:59AM -0800
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com>
Message-ID: <20010319092733.C19071@glacier.fnational.com>

On Mon, Mar 19, 2001 at 09:21:59AM -0800, Neil Schemenauer wrote:
> Also, the 5% speedup is base on the refactoring of eval_code2
> with the added generator bits.

Ugh, that should say "based on the refactoring of eval_code2
WITHOUT the generator bits".

  engage-fingers-before-brain-ly y'rs Neil



From nas@arctrix.com  Mon Mar 19 17:38:44 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 09:38:44 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <3AB64117.8D3AEBED@tismer.com>; from tismer@tismer.com on Mon, Mar 19, 2001 at 06:25:43PM +0100
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com> <3AB64117.8D3AEBED@tismer.com>
Message-ID: <20010319093844.D19071@glacier.fnational.com>

On Mon, Mar 19, 2001 at 06:25:43PM +0100, Christian Tismer wrote:
> I believe this is going to be the best possible efficient
> implementation of generators.  And I'm very confident that it
> will make it into the core with ease and without the need for a
> PEP.

I sure hope not.  We need to come up with better APIs and a
better interface from Python code.  The current interface is not
efficiently implementable in Jython, AFAIK.  We also need to
figure out how to make things play nicely with stackless.  IMHO,
a PEP is required.

My plan now is to look at how stackless works as I now understand
some of the issues.  Since no stackless light patch exists,
writing one may be a good learning project.  It's still a long
road to 2.2. :-)

  Neil


From tismer@tismer.com  Mon Mar 19 17:43:20 2001
From: tismer@tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 18:43:20 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com> <3AB64117.8D3AEBED@tismer.com> <20010319093844.D19071@glacier.fnational.com>
Message-ID: <3AB64538.15522433@tismer.com>


Neil Schemenauer wrote:
> 
> On Mon, Mar 19, 2001 at 06:25:43PM +0100, Christian Tismer wrote:
> > I believe this is going to be the best possible efficient
> > implementation of generators.  And I'm very confident that it
> > will make it into the core with ease and without the need for a
> > PEP.
> 
> I sure hope not.  We need to come up with better APIs and a
> better interface from Python code.  The current interface is not
> efficiently implementable in Jython, AFAIK.  We also need to
> figure out how to make things play nicely with stackless.  IMHO,
> a PEP is required.

Yes, sure. What I meant was not the current code, but the
simplistic, straightforward approach.

> My plan now is to look at how stackless works as I now understand
> some of the issues.  Since no stackless light patch exists
> writing one may be a good learning project.  Its still a long
> road to 2.2. :-)

Warning, *unreadable* code. If you really want to read that,
make sure to use ceval_pre.c, this comes almost without optimization.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From paulp@ActiveState.com  Mon Mar 19 17:55:36 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Mon, 19 Mar 2001 09:55:36 -0800
Subject: [Python-Dev] nondist/sandbox/typecheck
Message-ID: <3AB64818.DA458342@ActiveState.com>

Could I check some type-checking code into nondist/sandbox? It's
quickly getting to the point where real users can start to see benefits
from it and I would like to let people play with it to convince
themselves of that.

Consider these mistaken statements:

os.path.abspath(None)
xmllib.XMLParser().feed(None)
sre.compile(".*", "I")

Here's what we used to get as tracebacks:

	os.path.abspath(None)
	(no error: any false value is treated the same as the empty
	string!)

	xmllib.XMLParser().feed(None)

Traceback (most recent call last):
  File "errors.py", line 8, in ?
    xmllib.XMLParser().feed(None)
  File "c:\python20\lib\xmllib.py", line 164, in feed
    self.rawdata = self.rawdata + data
TypeError: cannot add type "None" to string

	sre.compile(".*", "I")

Traceback (most recent call last):
  File "errors.py", line 12, in ?
    sre.compile(".*", "I")
  File "c:\python20\lib\sre.py", line 62, in compile
    return _compile(pattern, flags)
  File "c:\python20\lib\sre.py", line 100, in _compile
    p = sre_compile.compile(pattern, flags)
  File "c:\python20\lib\sre_compile.py", line 359, in compile
    p = sre_parse.parse(p, flags)
  File "c:\python20\lib\sre_parse.py", line 586, in parse
    p = _parse_sub(source, pattern, 0)
  File "c:\python20\lib\sre_parse.py", line 294, in _parse_sub
    items.append(_parse(source, state))
  File "c:\python20\lib\sre_parse.py", line 357, in _parse
    if state.flags & SRE_FLAG_VERBOSE:
TypeError: bad operand type(s) for &

====================

Here's what we get now:

	os.path.abspath(None)

Traceback (most recent call last):
  File "errors.py", line 4, in ?
    os.path.abspath(None)
  File "ntpath.py", line 401, in abspath
    def abspath(path):
InterfaceError: Parameter 'path' expected Unicode or 8-bit string.
Instead it got 'None' (None)

	xmllib.XMLParser().feed(None)

Traceback (most recent call last):
  File "errors.py", line 8, in ?
    xmllib.XMLParser().feed(None)
  File "xmllib.py", line 163, in feed
    def feed(self, data):
InterfaceError: Parameter 'data' expected Unicode or 8-bit string.
Instead it got 'None' (None)

	sre.compile(".*", "I")

Traceback (most recent call last):
  File "errors.py", line 12, in ?
    sre.compile(".*", "I")
  File "sre.py", line 61, in compile
    def compile(pattern, flags=0):
InterfaceError: Parameter 'flags' expected None.
Instead it got 'string' ('I')
None
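A minimal sketch of the idea (hypothetical; the names, messages, and mechanism of the real sandbox code differ): check the argument at the call boundary and raise a descriptive error immediately, instead of letting a bare TypeError surface five frames deep.

```python
# Hypothetical sketch of call-boundary type checking.  InterfaceError,
# expects_string, and the message format are illustrative inventions.
class InterfaceError(TypeError):
    pass

def expects_string(param_name):
    """Wrap a one-argument function so a bad argument fails at once."""
    def decorator(func):
        def wrapper(value):
            if not isinstance(value, str):
                raise InterfaceError(
                    "Parameter %r expected a string. "
                    "Instead it got %r" % (param_name, value))
            return func(value)
        return wrapper
    return decorator

@expects_string("path")
def abspath(path):
    # stand-in for the real os.path.abspath
    return "/" + path
```

The caller now sees the failing parameter and value in the first frame of the traceback, which is the whole point of the improved messages above.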

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook


From ping@lfw.org  Mon Mar 19 21:07:10 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Mon, 19 Mar 2001 13:07:10 -0800 (PST)
Subject: [Python-Dev] Nested scopes core dump
Message-ID: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>

I just tried this:

    Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
    [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> from __future__ import nested_scopes
    >>> def f(x):
    ...     x = x + 1
    ...     a = x + 3
    ...     b = x + 5
    ...     def g(y):
    ...         def h(z):
    ...             return a, b, x, y, z
    ...         return h
    ...     return g
    ...
    Fatal Python error: non-string found in code slot
    Aborted (core dumped)

gdb says v is NULL:

    #5  0x8059cce in PyCode_New (argcount=1, nlocals=2, stacksize=5, flags=3, code=0x8144688, consts=0x8145c1c, names=0x8122974, varnames=0x8145c6c, freevars=0x80ecc14, cellvars=0x81225d4, filename=0x812f900, name=0x810c288, firstlineno=5, lnotab=0x8144af0) at Python/compile.c:279
    279             intern_strings(freevars);
    (gdb) down
    #4  0x8059b80 in intern_strings (tuple=0x80ecc14) at Python/compile.c:233
    233                             Py_FatalError("non-string found in code slot");
    (gdb) list 230
    225     static int
    226     intern_strings(PyObject *tuple)
    227     {
    228             int i;
    229
    230             for (i = PyTuple_GET_SIZE(tuple); --i >= 0; ) {
    231                     PyObject *v = PyTuple_GET_ITEM(tuple, i);
    232                     if (v == NULL || !PyString_Check(v)) {
    233                             Py_FatalError("non-string found in code slot");
    234                             PyErr_BadInternalCall();
    (gdb) print v
    $1 = (PyObject *) 0x0

Hope this helps (this test should probably be added to test_scope.py too),


-- ?!ng

Happiness comes more from loving than being loved; and often when our
affection seems wounded it is only our vanity bleeding. To love, and
to be hurt often, and to love again--this is the brave and happy life.
    -- J. E. Buchrose 



From jeremy@alum.mit.edu  Mon Mar 19 21:09:30 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Mon, 19 Mar 2001 16:09:30 -0500 (EST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
Message-ID: <15030.30090.898715.282761@w221.z064000254.bwi-md.dsl.cnc.net>

Please submit bug reports as SF bug reports.  (Thanks for finding it,
but if I don't get to it today this email does me little good.)

Jeremy


From MarkH@ActiveState.com  Mon Mar 19 21:53:29 2001
From: MarkH@ActiveState.com (Mark Hammond)
Date: Tue, 20 Mar 2001 08:53:29 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <09c001c0b06d$0f359eb0$8119fea9@neil>
Message-ID: <LCEPIIGDJPKCOIHOBJEPMEEJDGAA.MarkH@ActiveState.com>

Hi Neil!

>    The "program files" and "user" directory should still have names

"should" or "will"?

> representable in the normal locale used by the user so they are able to
> access them by using their standard encoding in a Python narrow character
> string to the open function.

I don't understand what "their standard encoding" is here.  My understanding
is that "their standard encoding" is whatever WideCharToMultiByte() returns,
and this is what mbcs is.

My understanding is that their "default encoding" will bear no relationship
to encoding names as known by Python.  ie, given a user's locale, there is
no reasonable way to determine which of the Python encoding names will
always correctly work on these strings.

> > The way I see it, to fix this we have 2 basic choices when a Unicode
> object
> > is passed as a filename:
> > * we call the Unicode versions of the CRTL.
>
>    This is by far the better approach IMO as it is more general and will
> work for people who switch locales or who want to access files created by
> others using other locales. Although you can always use the horrid mangled
> "*~1" names.
>
> > * we auto-encode using the "mbcs" encoding, and still call the
> non-Unicode
> > versions of the CRTL.
>
>    This will improve things but to a lesser extent than the above. May be
> the best possible on 95.

I understand the above, but want to resist having different NT and 9x
versions of Python for obvious reasons.  I also wanted to avoid determining
at runtime if the platform has Unicode support and magically switching to
them.

I concur on the "may be the best possible on 95" and see no real downsides
on NT, other than the freak possibility of the default encoding being
changed _between_ us encoding a string and the OS decoding it.

Recall that my change is only to convert from Unicode to a string so the
file system can convert back to Unicode.  There is no real opportunity for
the current locale to change on this thread during this process.

I guess I see 3 options:

1) Do nothing, thereby forcing the user to manually encode the Unicode
object.  Only by encoding the string can they access these filenames, which
means the exact same issues apply.

2) Move to Unicode APIs where available, which will be a much deeper patch
and much harder to get right on non-Unicode Windows platforms.

3) Like 1, but simply automate the encoding task.

My proposal was to do (3).  It is not clear from your mail what you propose.
Like me, you seem to agree (2) would be perfect in an ideal world, but you
also agree we don't live in one.

What is your recommendation?

Mark.



From skip@pobox.com (Skip Montanaro)  Mon Mar 19 21:53:56 2001
From: skip@pobox.com (Skip Montanaro) (Skip Montanaro)
Date: Mon, 19 Mar 2001 15:53:56 -0600 (CST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <15030.30090.898715.282761@w221.z064000254.bwi-md.dsl.cnc.net>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
 <15030.30090.898715.282761@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <15030.32756.969347.565911@beluga.mojam.com>

    Jeremy> Please submit bug reports as SF bug reports.  (Thanks for
    Jeremy> finding it, but if I don't get to it today this email does me
    Jeremy> little good.)

What?  You actually delete email?  Or do you have an email system that works
like Usenet? 

;-)

S




From nhodgson@bigpond.net.au  Mon Mar 19 22:52:34 2001
From: nhodgson@bigpond.net.au (Neil Hodgson)
Date: Tue, 20 Mar 2001 09:52:34 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPMEEJDGAA.MarkH@ActiveState.com>
Message-ID: <02e401c0b0c7$4a38a2a0$8119fea9@neil>

   Morning Mark,


> >    The "program files" and "user" directory should still have names
>
> "should" or "will"?

   Should. I originally wrote "will" but then thought of the scenario where
I install W2K with Russian as the default locale. The "Program Files"
directory (and other standard directories) is created with a localised name
(call it, "Russian PF" for now) including some characters not representable
in Latin 1. I then start working with a Python program and decide to change
the input locale to German. The "Russian PF" string is representable in
Unicode but not in the code page used for German so a WideCharToMultiByte
using the current code page will fail. Fail here means not that the function
will error but that a string will be constructed which will not round trip
back to Unicode and thus is unlikely to be usable to open the file.
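The round-trip failure described above can be reproduced at the Python level (a rough illustration with today's codecs; cp1252 stands in for the German code page, and the Cyrillic name is invented):

```python
# A localized directory name containing Cyrillic characters has no
# representation in a Western-European code page, so converting it to
# that narrow encoding must either fail outright or produce bytes that
# cannot map back to the original name.
name = "\u0420\u0443\u0441\u0441\u043a\u0438\u0439 PF"  # "Russian PF"
try:
    narrow = name.encode("cp1252")
    survives = narrow.decode("cp1252") == name
except UnicodeEncodeError:
    survives = False
print(survives)  # prints False
```

The Windows API behaves analogously: WideCharToMultiByte substitutes a default character rather than raising, so the damage only shows up when the mangled name fails to open the file.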

> > representable in the normal locale used by the user so they are able to
> > access them by using their standard encoding in a Python narrow
> > character string to the open function.
>
> I don't understand what "their standard encoding" is here.  My
> understanding is that "their standard encoding" is whatever
> WideCharToMultiByte() returns, and this is what mbcs is.

    WideCharToMultiByte has an explicit code page parameter so it's the
caller that has to know what they want. The most common thing to do is ask
the system for the input locale and use this in the call to
WideCharToMultiByte and there are some CRT functions like wcstombs that wrap
this. Passing CP_THREAD_ACP to WideCharToMultiByte is another way. Scintilla
uses:

static int InputCodePage() {
    HKL inputLocale = ::GetKeyboardLayout(0);
    LANGID inputLang = LOWORD(inputLocale);
    char sCodePage[10];
    int res = ::GetLocaleInfo(MAKELCID(inputLang, SORT_DEFAULT),
        LOCALE_IDEFAULTANSICODEPAGE, sCodePage, sizeof(sCodePage));
    if (!res)
        return 0;
    return atoi(sCodePage);
}

   which is the result of reading various articles from MSDN and MSJ.
microsoft.public.win32.programmer.international is the news group for this
and Michael Kaplan answers a lot of these sorts of questions.

> My understanding is that their "default encoding" will bear no relationship
> to encoding names as known by Python.  ie, given a user's locale, there is
> no reasonable way to determine which of the Python encoding names will
> always correctly work on these strings.

   Uncertain. There should be a way to get the input locale as a Python
encoding name or working on these sorts of issues will be difficult.

> Recall that my change is only to convert from Unicode to a string so the
> file system can convert back to Unicode.  There is no real opportunity for
> the current locale to change on this thread during this process.

   But the Unicode string may be non-representable using the current locale.
So doing the conversion makes the string unusable.

> My proposal was to do (3).  It is not clear from your mail what you propose.
> Like me, you seem to agree (2) would be perfect in an ideal world, but you
> also agree we don't live in one.

   I'd prefer (2). Support Unicode well on the platforms that support it
well. Providing some help on 95 is nice but not IMO as important.

   Neil




From mwh21@cam.ac.uk  Mon Mar 19 23:14:08 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 19 Mar 2001 23:14:08 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Ka-Ping Yee's message of "Mon, 19 Mar 2001 13:07:10 -0800 (PST)"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
Message-ID: <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>

Ka-Ping Yee <ping@lfw.org> writes:

> I just tried this:
> 
>     Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> from __future__ import nested_scopes
>     >>> def f(x):
>     ...     x = x + 1
>     ...     a = x + 3
>     ...     b = x + 5
>     ...     def g(y):
>     ...         def h(z):
>     ...             return a, b, x, y, z
>     ...         return h
>     ...     return g
>     ...
>     Fatal Python error: non-string found in code slot
>     Aborted (core dumped)

Here, look at this:

static int
symtable_freevar_offsets(PyObject *freevars, int offset)
{
      PyObject *name, *v;
      int pos;

      /* The cell vars are the first elements of the closure,
         followed by the free vars.  Update the offsets in
         c_freevars to account for number of cellvars. */  
      pos = 0;
      while (PyDict_Next(freevars, &pos, &name, &v)) {
              int i = PyInt_AS_LONG(v) + offset;
              PyObject *o = PyInt_FromLong(i);
              if (o == NULL)
                      return -1;
              if (PyDict_SetItem(freevars, name, o) < 0) {
                      Py_DECREF(o);
                      return -1;
              }
              Py_DECREF(o);
      }
      return 0;
}

this modifies the dictionary you're iterating over.  This is, as they
say, a Bad Idea[*].

https://sourceforge.net/tracker/index.php?func=detail&aid=409864&group_id=5470&atid=305470

is a minimal-effort/impact fix.  I don't know the new compile.c well
enough to really judge the best fix.

Cheers,
M.

[*] I thought that if you used the same keys when you were iterating
    over a dict you were safe.  It seems not, at least as far as I
    could tell with mounds of debugging printf's.
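For the record, the same hazard is visible from Python code. Modern CPython makes it loud (a rough sketch; in the interpreter of the time the corruption was silent, which is why it took printf archaeology to find): inserting a key while iterating can resize the table under the iterator, whereas overwriting an existing key leaves the size alone.

```python
# Mutating a dict while iterating over it is unsafe because an insert
# can trigger a resize, which rebuilds the hash table under the iterator.
d = {"a": 0, "b": 1}
aborted = False
for key in list(d):          # safe: iterate over a snapshot of the keys
    d[key] = d[key] + 10     # overwriting existing keys never resizes
try:
    for key in d:            # unsafe: inserting new keys mid-iteration
        d[key + "!"] = 99
except RuntimeError:         # "dictionary changed size during iteration"
    aborted = True
print(aborted)
```

The subtlety in the compiler bug is that PyDict_SetItem could resize even for an *existing* key, because the resize check ran before the key lookup.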
-- 
  (Of course SML does have its weaknesses, but by comparison, a
  discussion of C++'s strengths and flaws always sounds like an
  argument about whether one should face north or east when one
  is sacrificing one's goat to the rain god.)         -- Thant Tessman



From jeremy@alum.mit.edu  Mon Mar 19 23:17:30 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Mon, 19 Mar 2001 18:17:30 -0500 (EST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
 <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "MWH" == Michael Hudson <mwh21@cam.ac.uk> writes:

  MWH> [*] I thought that if you used the same keys when you were
  MWH> iterating over a dict you were safe.  It seems not, at least as
  MWH> far as I could tell with mounds of debugging printf's.

I did, too.  Anyone know what the problem is?  

Jeremy


From martin@loewis.home.cs.tu-berlin.de  Mon Mar 19 23:16:34 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 20 Mar 2001 00:16:34 +0100
Subject: [Python-Dev] Unicode and the Windows file system.
Message-ID: <200103192316.f2JNGYK02041@mira.informatik.hu-berlin.de>

> The way I see it, to fix this we have 2 basic choices when a Unicode object
> is passed as a filename:
>
> * we call the Unicode versions of the CRTL.

That is the choice that I prefer. I understand that it won't work on
Win95, but I think that needs to be worked-around.

By using "Unicode versions" of an API, you are making the code
Windows-specific anyway. So I wonder whether it might be better to use
the plain API instead of the CRTL; I also wonder how difficult it
actually is to do "the right thing all the time".

On NT, the file system is defined in terms of Unicode, so passing
Unicode in and out is definitely the right thing (*). On Win9x, the
file system uses some platform specific encoding, which means that
using that encoding is the right thing. On Unix, there is no
established convention, but UTF-8 was invented exactly to deal with
Unicode in Unix file systems, so that might be appropriate choice
(**).

So I'm in favour of supporting Unicode on all file system APIs; that
does include os.listdir(). For 2.1, that may be a bit much given that
a beta release has already been seen; so only accepting Unicode on
input is what we can do now.

Regards,
Martin

(*) Converting to the current MBCS might be lossy, and it might not
support all file names. The "ASCII only" approach of 2.0 was precisely
taken to allow getting it right later; I strongly discourage any
approach that attempts to drop the restriction in a way that does not
allow to get it right later.

(**) At least, that is the best bet. Many Unix installations use some
other encoding in their file names; if Unicode becomes more common,
most likely installations will also use UTF-8 on their file systems.
Unless it can be established what the file system encoding is,
returning Unicode from os.listdir is probably not the right thing.


From mwh21@cam.ac.uk  Mon Mar 19 23:44:11 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 19 Mar 2001 23:44:11 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Jeremy Hylton's message of "Mon, 19 Mar 2001 18:17:30 -0500 (EST)"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk>

Jeremy Hylton <jeremy@alum.mit.edu> writes:

> >>>>> "MWH" == Michael Hudson <mwh21@cam.ac.uk> writes:
> 
>   MWH> [*] I thought that if you used the same keys when you were
>   MWH> iterating over a dict you were safe.  It seems not, at least as
>   MWH> far as I could tell with mounds of debugging printf's.
> 
> I did, too.  Anyone know what the problems is?  

The dict's resizing, it turns out.

I note that in PyDict_SetItem, the check to see if the dict needs
resizing occurs *before* it is known whether the key is already in the
dict.  But if this is the problem, how come we haven't been bitten by
this before?

Cheers,
M.

-- 
  While preceding your entrance with a grenade is a good tactic in
  Quake, it can lead to problems if attempted at work.    -- C Hacking
               -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html



From jeremy@alum.mit.edu  Mon Mar 19 23:48:42 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Mon, 19 Mar 2001 18:48:42 -0500 (EST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
 <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>
 <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net>
 <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "MH" == Michael Hudson <mwh21@cam.ac.uk> writes:

  MH> Jeremy Hylton <jeremy@alum.mit.edu> writes:
  >> >>>>> "MWH" == Michael Hudson <mwh21@cam.ac.uk> writes:
  >>
  MWH> [*] I thought that if you used the same keys when you were
  MWH> iterating over a dict you were safe.  It seems not, at least as
  MWH> far as I could tell with mounds of debugging printf's.
  >>
  >> I did, too.  Anyone know what the problems is?

  MH> The dict's resizing, it turns out.

So a hack to make the iteration safe would be to assign an element
and then delete it?

  MH> I note that in PyDict_SetItem, the check to see if the dict
  MH> needs resizing occurs *before* it is known whether the key is
  MH> already in the dict.  But if this is the problem, how come we
  MH> haven't been bitten by this before?

It's probably unusual for a dictionary to be in this state when the
compiler decides to update the values.

Jeremy


From MarkH@ActiveState.com  Mon Mar 19 23:57:21 2001
From: MarkH@ActiveState.com (Mark Hammond)
Date: Tue, 20 Mar 2001 10:57:21 +1100
Subject: [Python-Dev] RE: Unicode and the Windows file system.
In-Reply-To: <200103192316.f2JNGYK02041@mira.informatik.hu-berlin.de>
Message-ID: <LCEPIIGDJPKCOIHOBJEPKEEODGAA.MarkH@ActiveState.com>

OK - it appears everyone agrees we should go the "Unicode API" route.  I
actually thought my scheme did not preclude moving to this later.

This is a much bigger can of worms than I have bandwidth to take on at the
moment.  As Martin mentions, what will os.listdir() return on Win9x vs
Win2k?  What does passing a Unicode object to a non-Unicode Win32 platform
mean? etc.  How do Win95/98/ME differ in their Unicode support?  Do the
various service packs for each of these change the basic support?

So unfortunately this simply means the status quo remains until someone
_does_ have the time and inclination.  That may well be me in the future,
but is not now.  It also means that until then, Python programmers will
struggle with this and determine that they can make it work simply by
encoding the Unicode as an "mbcs" string.  Or worse, they will note that
"latin1 seems to work" and use that even though it will work "less often"
than mbcs.  I was simply hoping to automate that encoding using a scheme
that works "most often".

The biggest drawback is that by doing nothing we are _encouraging_ the user
to write broken code.  The way things stand at the moment, the users will
_never_ pass Unicode objects to these APIs (as they don't work) and will
therefore manually encode a string.  To my mind this is _worse_ than what my
scheme proposes - at least my scheme allows Unicode objects to be passed to
the Python functions - python may choose to change the way it handles these
in the future.  But by forcing the user to encode a string we have lost
_all_ meaningful information about the Unicode object and can only hope they
got the encoding right.

If anyone else decides to take this on, please let me know.  However, I fear
that in a couple of years we may still be waiting and in the meantime people
will be coding hacks that will _not_ work in the new scheme.

c'est-la-vie-ly,

Mark.



From mwh21@cam.ac.uk  Tue Mar 20 00:02:59 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 00:02:59 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Jeremy Hylton's message of "Mon, 19 Mar 2001 18:48:42 -0500 (EST)"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk>

Jeremy Hylton <jeremy@alum.mit.edu> writes:

> >>>>> "MH" == Michael Hudson <mwh21@cam.ac.uk> writes:
> 
>   MH> Jeremy Hylton <jeremy@alum.mit.edu> writes:
>   >> >>>>> "MWH" == Michael Hudson <mwh21@cam.ac.uk> writes:
>   >>
>   MWH> [*] I thought that if you used the same keys when you were
>   MWH> iterating over a dict you were safe.  It seems not, at least as
>   MWH> far as I could tell with mounds of debugging printf's.
>   >>
>   >> I did, too.  Anyone know what the problems is?
> 
>   MH> The dict's resizing, it turns out.
> 
> So a hack to make the iteration safe would be to assign and element
> and then delete it?

Yes.  This would be gross beyond belief though.  Particularly as the
normal case is for freevars to be empty.

>   MH> I note that in PyDict_SetItem, the check to see if the dict
>   MH> needs resizing occurs *before* it is known whether the key is
>   MH> already in the dict.  But if this is the problem, how come we
>   MH> haven't been bitten by this before?
> 
> It's probably unusual for a dictionary to be in this state when the
> compiler decides to update the values.

What I meant was that there are bits and pieces of code in the Python
core that blithely pass keys gotten from PyDict_Next into
PyDict_SetItem.  From what I've just learnt, I'd expect this to
occasionally cause glitches of extreme confusing-ness.  Though on
investigation, I don't think any of these bits of code are sensitive
to getting keys out multiple times (which is what happens in this case
- though you must be able to miss keys too).  Might cause the odd leak
here and there.

Cheers,
M.

-- 
  Clue: You've got the appropriate amount of hostility for the
  Monastery, however you are metaphorically getting out of the
  safari jeep and kicking the lions.                         -- coonec
               -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html



From greg@cosc.canterbury.ac.nz  Tue Mar 20 00:19:35 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 20 Mar 2001 12:19:35 +1200 (NZST)
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5FCE5.92A133AB@lemburg.com>
Message-ID: <200103200019.MAA06253@s454.cosc.canterbury.ac.nz>

"M.-A. Lemburg" <mal@lemburg.com>:

> Actually opening a file in record mode and then using
> file.seek() should work on many platforms.

Not on Unix! No space is actually allocated until you
write something, regardless of where you seek to. And
then only the blocks that you touch (files can have
holes in them).
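This is easy to check directly (a Unix-only sketch; block accounting varies by filesystem): seek far past the end, write one byte, and compare the reported size with the allocated blocks.

```python
import os
import tempfile

# Seek past end-of-file and write a single byte: the file *size* becomes
# 1 MiB, but on typical Unix filesystems only the touched block is
# actually allocated -- seeking reserves no space at all.
fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 1024 * 1024 - 1, os.SEEK_SET)
    os.write(fd, b"\0")
    st = os.fstat(fd)
    print("size:", st.st_size)               # 1048576
    print("allocated:", st.st_blocks * 512)  # usually far less
finally:
    os.close(fd)
    os.remove(path)
```

So "reserve space by seeking and writing the last byte" tells you nothing about whether the intervening blocks will be available when you actually write them.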

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Tue Mar 20 00:21:47 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 20 Mar 2001 12:21:47 +1200 (NZST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AB60DAB.D92D12BF@tismer.com>
Message-ID: <200103200021.MAA06256@s454.cosc.canterbury.ac.nz>

Christian Tismer <tismer@tismer.com>:

> It does not
> matter how and where frames were created, it is just impossible
> to jump at a frame that is held by an interpreter on the C stack.

I think I need a clearer idea of what it means for a frame
to be "held by an interpreter".

I gather that each frame has a lock flag. How and when does
this flag get set and cleared?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim.one@home.com  Tue Mar 20 01:48:27 2001
From: tim.one@home.com (Tim Peters)
Date: Mon, 19 Mar 2001 20:48:27 -0500
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <20010319141834.X27808@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMHJGAA.tim.one@home.com>

Here's a radical suggestion:  Start a x-platform project on SourceForge,
devoted to producing a C library with a common interface for
platform-dependent crud like "how big is this file?" and "how many bytes free
on this disk?" and "how can I execute a shell command in a portable way?"
(e.g., Tcl's "exec" emulates a subset of Bourne shell syntax, including
redirection and pipes, even on Windows 3.1).

OK, that's too useful.  Nevermind ...



From tismer@tismer.com  Tue Mar 20 05:15:01 2001
From: tismer@tismer.com (Christian Tismer)
Date: Tue, 20 Mar 2001 06:15:01 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103200021.MAA06256@s454.cosc.canterbury.ac.nz>
Message-ID: <3AB6E755.B39C2E62@tismer.com>


Greg Ewing wrote:
> 
> Christian Tismer <tismer@tismer.com>:
> 
> > It does not
> > matter how and where frames were created, it is just impossible
> > to jump at a frame that is held by an interpreter on the C stack.
> 
> I think I need a clearer idea of what it means for a frame
> to be "held by an interpreter".
> 
> I gather that each frame has a lock flag. How and when does
> this flag get set and cleared?

Assume a frame F being executed by an interpreter A.
Now, if this frame calls a function, which in turn
starts another interpreter B, this hides interpreter
A on the C stack. Frame F cannot be run by anything
until interpreter B is finished.
Exactly in this situation, frame F has its lock set,
to prevent crashes.
Such a locked frame cannot be a switch target.
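[A rough Python-level sketch of that rule -- the real lock flag lives inside Stackless's frame objects, and all names here are illustrative, not actual API:]

```python
class Frame:
    """Stand-in for a Python frame with Stackless's lock flag."""
    def __init__(self):
        self.locked = False

def run_in_nested_interpreter(frame, func):
    # While nested interpreter B runs on the C stack above frame F,
    # F is locked: it cannot be resumed until B returns.
    frame.locked = True
    try:
        return func()
    finally:
        frame.locked = False

def switch_to(frame):
    # A locked frame may not be a switch target.
    if frame.locked:
        raise RuntimeError("cannot switch to a frame held on the C stack")
```

[Attempting `switch_to(F)` from inside the nested call raises; once the inner interpreter finishes, F is unlocked and switchable again.]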

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From barry@digicool.com  Tue Mar 20 05:12:17 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Tue, 20 Mar 2001 00:12:17 -0500
Subject: [Python-Dev] Re: What has become of PEP224 ?
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>
Message-ID: <15030.59057.866982.538935@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido@python.org> writes:

    GvR> So I see little chance for PEP 224.  Maybe I should just
    GvR> pronounce on this, and declare the PEP rejected.

So, was that a BDFL pronouncement or not? :)

-Barry


From tim_one@email.msn.com  Tue Mar 20 05:57:23 2001
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 20 Mar 2001 00:57:23 -0500
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <200103191312.IAA25747@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGENHJGAA.tim_one@email.msn.com>

[Mark Hammond]
> * os.listdir() returns '\xe0test\xf2' for this file.

[Guido]
> I don't understand.  This is a Latin-1 string.  Can you explain again
> how the MBCS encoding encodes characters outside the Latin-1 range?

I expect this is a coincidence.  MBCS is a generic term for a large number of
distinct variable-length encoding schemes, one or more specific to each
language.  Latin-1 is a subset of some MBCS schemes, but not of others; Mark
was using a German mblocale, right?  Across MS's set of MBCS schemes, there's
little consistency:  a one-byte encoding in one of them may well be a "lead
byte" (== the first byte of a two-byte encoding) in another.

All this stuff is hidden under layers of macros so general that, if you code
it right, you can switch between compiling MBCS code on Win95 and Unicode
code on NT via setting one compiler #define.  Or that's what they advertise.
The multi-lingual Windows app developers at my previous employer were all
bald despite being no older than 23 <wink>.

ascii-boy-ly y'rs  - tim



From tim_one@email.msn.com  Tue Mar 20 06:31:49 2001
From: tim_one@email.msn.com (Tim Peters)
Date: Tue, 20 Mar 2001 01:31:49 -0500
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010319084534.A18938@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENJJGAA.tim_one@email.msn.com>

[Neil Schemenauer]
> I like the frame methods.  However, this may be a good idea since
> Jython may implement things quite differently.

Note that the "compare fringes of two trees" example is a classic not because
it's inherently interesting, but because it distills the essence of a
particular *class* of problem (that's why it's popular with academics).

In Icon you need to create co-expressions to solve this problem, because its
generators aren't explicitly resumable, and Icon has no way to spell "kick a
pair of generators in lockstep".  But explicitly resumable generators are in
fact "good enough" for this classic example, which is usually used to
motivate coroutines.

I expect this relates to the XLST/XSLT/whatever-the-heck-it-was example:  if
Paul thought iterators were the bee's knees there, I *bet* in glorious
ignorance that iterators implemented via Icon-style generators would be the
bee's pajamas.

Of course Christian is right that you have to prevent a suspended frame from
getting activated more than once simultaneously; but that's detectable, and
should be considered a programmer error if it happens.



From fredrik@pythonware.com  Tue Mar 20 07:00:51 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 08:00:51 +0100
Subject: [Python-Dev] RE: Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEEODGAA.MarkH@ActiveState.com>
Message-ID: <003a01c0b10b$80e6a650$e46940d5@hagrid>

Mark Hammond wrote:
> OK - it appears everyone agrees we should go the "Unicode API" route.

well, I'd rather play with a minimal (mbcs) patch now, than wait another
year or so for a full unicodification, so if you have the time...

Cheers /F



From tim.one@home.com  Tue Mar 20 07:08:53 2001
From: tim.one@home.com (Tim Peters)
Date: Tue, 20 Mar 2001 02:08:53 -0500
Subject: FW: FW: [Python-Dev] Simple generator implementation
In-Reply-To: <200103190709.AAA10053@localhost.localdomain>
Message-ID: <LNBBLJKPBEHFEDALKOLCMENKJGAA.tim.one@home.com>

[Uche Ogbuji]
> Quite interesting.  I brought up this *exact* point at the
> Stackless BOF at IPC9.  I mentioned that the immediate reason
> I was interested in Stackless was to supercharge the efficiency
> of 4XSLT.  I think that a stackless 4XSLT could pretty much
> annihilate the other processors in the field for performance.

Hmm.  I'm interested in clarifying the cost/performance boundaries of the
various approaches.  I don't understand XSLT (I don't even know what it is).
Do you grok the difference between full-blown Stackless and Icon-style
generators?  The correspondent I quoted believed the latter were on-target
for XSLT work, and given the way Python works today generators are easier to
implement than full-blown Stackless.  But while I can speak with some
confidence about the latter, I don't know whether they're sufficient for what
you have in mind.

If this is some flavor of one-at-time tree-traversal algorithm, generators
should suffice.

class TreeNode:
    # with self.value
    #      self.children, a list of TreeNode objects
    ...
    def generate_kids(self):  # pre-order traversal
        suspend self.value
        for kid in self.children:
            for itskids in kid.generate_kids():
                suspend itskids

for k in someTreeNodeObject.generate_kids():
    print k

So the control-flow is thoroughly natural, but you can only suspend to your
immediate invoker (in recursive traversals, this "walks up the chain" of
generators for each result).  With explicitly resumable generator objects,
multiple trees (or even general graphs -- doesn't much matter) can be
traversed in lockstep (or any other interleaving that's desired).

Now decide <wink>.
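[Tim's `suspend` is hypothetical syntax; the generators that eventually shipped (PEP 255, Python 2.2) spell it `yield`, and the same pre-order walk reads:]

```python
class TreeNode:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

    def generate_kids(self):  # pre-order traversal, as above
        yield self.value
        for kid in self.children:
            for itskids in kid.generate_kids():
                yield itskids

tree = TreeNode(1, [TreeNode(2), TreeNode(3, [TreeNode(4)])])
fringe = list(tree.generate_kids())  # pre-order: 1, 2, 3, 4
```

[Two such generator objects can be advanced in lockstep with explicit `next()` calls -- exactly the "compare fringes of two trees" pattern.]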




From fredrik@pythonware.com  Tue Mar 20 07:36:59 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 08:36:59 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
References: <LNBBLJKPBEHFEDALKOLCMEMHJGAA.tim.one@home.com>
Message-ID: <017a01c0b110$8d132890$e46940d5@hagrid>

tim wrote:
> Here's a radical suggestion:  Start a x-platform project on SourceForge,
> devoted to producing a C library with a common interface for
> platform-dependent crud like "how big is this file?" and "how many bytes free
> on this disk?" and "how can I execute a shell command in a portable way?"
> (e.g., Tcl's "exec" emulates a subset of Bourne shell syntax, including
> redirection and pipes, even on Windows 3.1).

counter-suggestion:

add partial os.statvfs emulation to the posix module for Windows
(and Mac), and write helpers for shutil to do the fancy stuff you
mentioned before.

Cheers /F



From tim.one@home.com  Tue Mar 20 08:30:18 2001
From: tim.one@home.com (Tim Peters)
Date: Tue, 20 Mar 2001 03:30:18 -0500
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <017a01c0b110$8d132890$e46940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENPJGAA.tim.one@home.com>

[Fredrik Lundh]
> counter-suggestion:
>
> add partial os.statvfs emulation to the posix module for Windows
> (and Mac), and write helpers for shutil to do the fancy stuff you
> mentioned before.

One of the best things Python ever did was to introduce os.path.getsize() +
friends, saving the bulk of the world from needing to wrestle with the
obscure Unix stat() API.  os.chmod() is another x-platform teachability pain;
if there's anything worth knowing in the bowels of statvfs(), let's please
spell it in a human-friendly way from the start.
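[The ergonomic difference Tim alludes to, sketched side by side:]

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.close(fd)

friendly = os.path.getsize(path)       # the readable spelling
obscure = os.stat(path)[stat.ST_SIZE]  # same number, dug out of the stat tuple
os.remove(path)
```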



From fredrik@effbot.org  Tue Mar 20 08:58:53 2001
From: fredrik@effbot.org (Fredrik Lundh)
Date: Tue, 20 Mar 2001 09:58:53 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
References: <LNBBLJKPBEHFEDALKOLCKENPJGAA.tim.one@home.com>
Message-ID: <01ec01c0b11b$ff9593c0$e46940d5@hagrid>

Tim Peters wrote:
> One of the best things Python ever did was to introduce os.path.getsize() +
> friends, saving the bulk of the world from needing to wrestle with the
> obscure Unix stat() API.

yup (I remember lobbying for those years ago), but that doesn't
mean that we cannot make already existing low-level APIs work
on as many platforms as possible...

(just like os.popen etc)

adding os.statvfs for windows is pretty much a bug fix (for 2.1?),
but adding a new API is not (2.2).

> os.chmod() is another x-platform teachability pain

shutil.chmod("file", "g+x"), anyone?

> if there's anything worth knowing in the bowels of statvfs(), let's
> please spell it in a human-friendly way from the start.

how about os.path.getfreespace("path") and
os.path.gettotalspace("path") ?
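[Neither helper existed at the time; a sketch of how they might sit on top of statvfs on a POSIX box -- the function names are Fredrik's proposal, not a real API, and the attribute spelling shown here is the later one (the 2.1-era call returned a plain tuple indexed via the statvfs module):]

```python
import os

def getfreespace(path):
    # Space available to an unprivileged caller:
    # free fragments times the fragment size.
    s = os.statvfs(path)
    return s.f_bavail * s.f_frsize

def gettotalspace(path):
    # Total capacity of the filesystem holding path.
    s = os.statvfs(path)
    return s.f_blocks * s.f_frsize
```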

Cheers /F



From fredrik@pythonware.com  Tue Mar 20 12:07:23 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 13:07:23 +0100
Subject: [Python-Dev] sys.prefix woes
Message-ID: <04e601c0b136$52ee8e90$0900a8c0@SPIFF>

(windows, 2.0)

it looks like sys.prefix isn't set unless 1) PYTHONHOME is set, or
2) lib/os.py can be found somewhere between the directory your
executable is found in, and the root.

if neither is set, the path is taken from the registry, but sys.prefix
is left blank, and FixTk.py no longer works.

any ideas?  is this a bug?  is there an "official" workaround that
doesn't involve using the time machine to upgrade all BeOpen
and ActiveState kits?

Cheers /F



From guido@digicool.com  Tue Mar 20 12:48:09 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 07:48:09 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Your message of "20 Mar 2001 00:02:59 GMT."
 <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net>
 <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <200103201248.HAA29485@cj20424-a.reston1.va.home.com>

> >   MH> The dict's resizing, it turns out.
> > 
> > So a hack to make the iteration safe would be to assign an element
> > and then delete it?
> 
> Yes.  This would be gross beyond belief though.  Particularly as the
> normal case is for freevars to be empty.
> 
> >   MH> I note that in PyDict_SetItem, the check to see if the dict
> >   MH> needs resizing occurs *before* it is known whether the key is
> >   MH> already in the dict.  But if this is the problem, how come we
> >   MH> haven't been bitten by this before?
> > 
> > It's probably unusual for a dictionary to be in this state when the
> > compiler decides to update the values.
> 
> What I meant was that there are bits and pieces of code in the Python
> core that blithely pass keys gotten from PyDict_Next into
> PyDict_SetItem.

Where?

> From what I've just learnt, I'd expect this to
> occasionally cause glitches of extreme confusing-ness.  Though on
> investigation, I don't think any of these bits of code are sensitive
> to getting keys out multiple times (which is what happens in this case
> - though you must be able to miss keys too).  Might cause the odd leak
> here and there.

I'd fix the dict implementation, except that that's tricky.

Checking for a dup key in PyDict_SetItem() before calling dictresize()
slows things down.  Checking in insertdict() is wrong because
dictresize() uses that!

Jeremy, is there a way that you could fix your code to work around
this?  Let's talk about this when you get into the office.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Tue Mar 20 13:03:42 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 08:03:42 -0500
Subject: [Python-Dev] Re: What has become of PEP224 ?
In-Reply-To: Your message of "Tue, 20 Mar 2001 00:12:17 EST."
 <15030.59057.866982.538935@anthem.wooz.org>
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>
 <15030.59057.866982.538935@anthem.wooz.org>
Message-ID: <200103201303.IAA29601@cj20424-a.reston1.va.home.com>

> >>>>> "GvR" == Guido van Rossum <guido@python.org> writes:
> 
>     GvR> So I see little chance for PEP 224.  Maybe I should just
>     GvR> pronounce on this, and declare the PEP rejected.
> 
> So, was that a BDFL pronouncement or not? :)
> 
> -Barry

Yes it was.  I really don't like the syntax, the binding between the
docstring and the documented identifier is too weak.  It's best to do
this explicitly, e.g.

    a = 12*12
    __doc_a__ = """gross"""

--Guido van Rossum (home page: http://www.python.org/~guido/)


From mwh21@cam.ac.uk  Tue Mar 20 13:30:10 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 13:30:10 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Guido van Rossum's message of "Tue, 20 Mar 2001 07:48:09 -0500"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com>
Message-ID: <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido@digicool.com> writes:

> > >   MH> The dict's resizing, it turns out.
> > > 
> > > So a hack to make the iteration safe would be to assign an element
> > > and then delete it?
> > 
> > Yes.  This would be gross beyond belief though.  Particularly as the
> > normal case is for freevars to be empty.
> > 
> > >   MH> I note that in PyDict_SetItem, the check to see if the dict
> > >   MH> needs resizing occurs *before* it is known whether the key is
> > >   MH> already in the dict.  But if this is the problem, how come we
> > >   MH> haven't been bitten by this before?
> > > 
> > > It's probably unusual for a dictionary to be in this state when the
> > > compiler decides to update the values.
> > 
> > What I meant was that there are bits and pieces of code in the Python
> > core that blithely pass keys gotten from PyDict_Next into
> > PyDict_SetItem.
> 
> Where?

import.c:PyImport_Cleanup
moduleobject.c:_PyModule_Clear

Hrm, I was sure there were more than that, but there don't seem to be.
Sorry for the alarmism.

> > From what I've just learnt, I'd expect this to
> > occasionally cause glitches of extreme confusing-ness.  Though on
> > investigation, I don't think any of these bits of code are sensitive
> > to getting keys out multiple times (which is what happens in this case
> > - though you must be able to miss keys too).  Might cause the odd leak
> > here and there.
> 
> I'd fix the dict implementation, except that that's tricky.

I'd got that far...

> Checking for a dup key in PyDict_SetItem() before calling dictresize()
> slows things down.  Checking in insertdict() is wrong because
> dictresize() uses that!

Maybe you could do the check for resize *after* the call to
insertdict?  I think that would work, but I wouldn't like to go
messing with such a performance critical bit of code without some
careful thinking.

Cheers,
M.

-- 
  You sound surprised.  We're talking about a government department
  here - they have procedures, not intelligence.
                                            -- Ben Hutchings, cam.misc



From mwh21@cam.ac.uk  Tue Mar 20 13:44:50 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 13:44:50 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Michael Hudson's message of "20 Mar 2001 13:30:10 +0000"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com> <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <m3ae6gh7vx.fsf@atrus.jesus.cam.ac.uk>

Michael Hudson <mwh21@cam.ac.uk> writes:

> Guido van Rossum <guido@digicool.com> writes:
> 
> > Checking for a dup key in PyDict_SetItem() before calling dictresize()
> > slows things down.  Checking in insertdict() is wrong because
> > dictresize() uses that!
> 
> Maybe you could do the check for resize *after* the call to
> insertdict?  I think that would work, but I wouldn't like to go
> messing with such a performance critical bit of code without some
> careful thinking.

Indeed; this tiny little patch:

Index: Objects/dictobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/dictobject.c,v
retrieving revision 2.73
diff -c -r2.73 dictobject.c
*** Objects/dictobject.c	2001/01/18 00:39:02	2.73
--- Objects/dictobject.c	2001/03/20 13:38:04
***************
*** 496,501 ****
--- 496,508 ----
  	Py_INCREF(value);
  	Py_INCREF(key);
  	insertdict(mp, key, hash, value);
+ 	/* if fill >= 2/3 size, double in size */
+ 	if (mp->ma_fill*3 >= mp->ma_size*2) {
+ 		if (dictresize(mp, mp->ma_used*2) != 0) {
+ 			if (mp->ma_fill+1 > mp->ma_size)
+ 				return -1;
+ 		}
+ 	}
  	return 0;
  }
  
fixes Ping's reported crash.  You can't naively (as I did at first)
*only* check after the insertdict, 'cause dicts are created with 0
size.
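[The underlying hazard -- mutating a dict while walking it -- is visible from pure Python too. In the 2.1-era C loop the PyDict_Next pointers simply went stale on resize; modern CPython detects the mutation and raises instead of crashing. A sketch:]

```python
d = {"a": 1, "b": 2}
try:
    for k in d:          # analogous to a PyDict_Next loop
        d[k + "x"] = 0   # insertion may trigger a resize...
    detected = False
except RuntimeError:     # ...which today's dicts detect and refuse
    detected = True
```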

Currently building from scratch to do some performance testing.

Cheers,
M.

-- 
  It's a measure of how much I love Python that I moved to VA, where
  if things don't work out Guido will buy a plantation and put us to
  work harvesting peanuts instead.     -- Tim Peters, comp.lang.python



From fredrik@pythonware.com  Tue Mar 20 13:58:29 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 14:58:29 +0100
Subject: [Python-Dev] sys.prefix woes
References: <04e601c0b136$52ee8e90$0900a8c0@SPIFF>
Message-ID: <054e01c0b145$d9d727f0$0900a8c0@SPIFF>

I wrote:
> any ideas?  is this a bug?  is there an "official" workaround that
> doesn't involve using the time machine to upgrade all BeOpen
> and ActiveState kits?

I found a workaround (a place to put some app-specific python code
that runs before anyone actually attempts to use sys.prefix)

still looks like a bug, though.  I'll post it to sourceforge.

Cheers /F



From guido@digicool.com  Tue Mar 20 14:32:00 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 09:32:00 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Your message of "20 Mar 2001 13:30:10 GMT."
 <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com>
 <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <200103201432.JAA00360@cj20424-a.reston1.va.home.com>

> > Checking for a dup key in PyDict_SetItem() before calling dictresize()
> > slows things down.  Checking in insertdict() is wrong because
> > dictresize() uses that!
> 
> Maybe you could do the check for resize *after* the call to
> insertdict?  I think that would work, but I wouldn't like to go
> messing with such a performance critical bit of code without some
> careful thinking.

No, that could still decide to resize, couldn't it?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Tue Mar 20 14:33:20 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 09:33:20 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Your message of "20 Mar 2001 13:30:10 GMT."
 <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com>
 <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <200103201433.JAA00373@cj20424-a.reston1.va.home.com>

Ah, the solution is simple.  Check for identical keys only when about
to resize:

	/* if fill >= 2/3 size, double in size */
	if (mp->ma_fill*3 >= mp->ma_size*2) {
		***** test here *****
		if (dictresize(mp, mp->ma_used*2) != 0) {
			if (mp->ma_fill+1 > mp->ma_size)
				return -1;
		}
	}

--Guido van Rossum (home page: http://www.python.org/~guido/)


From mwh21@cam.ac.uk  Tue Mar 20 15:13:35 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 15:13:35 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Guido van Rossum's message of "Tue, 20 Mar 2001 09:33:20 -0500"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com> <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk> <200103201433.JAA00373@cj20424-a.reston1.va.home.com>
Message-ID: <m34rwoh3s0.fsf@atrus.jesus.cam.ac.uk>

Does anyone know how to reply to two messages gracefully in gnus?

Guido van Rossum <guido@digicool.com> writes:

> > Maybe you could do the check for resize *after* the call to
> > insertdict?  I think that would work, but I wouldn't like to go
> > messing with such a performance critical bit of code without some
> > careful thinking.
>
> No, that could still decide to resize, couldn't it?

Yes, but not when you're inserting on a key that is already in the
dictionary - because the resize would have happened when the key was
inserted into the dictionary, and thus the problem we're seeing here
wouldn't happen.

What's happening in Ping's test case is that the dict is in some sense
being prepped to resize when an item is added but not actually
resizing until PyDict_SetItem is called again, which is unfortunately
inside a PyDict_Next loop.

Guido van Rossum <guido@digicool.com> writes:

> Ah, the solution is simple.  Check for identical keys only when about
> to resize:
> 
> 	/* if fill >= 2/3 size, double in size */
> 	if (mp->ma_fill*3 >= mp->ma_size*2) {
> 		***** test here *****
> 		if (dictresize(mp, mp->ma_used*2) != 0) {
> 			if (mp->ma_fill+1 > mp->ma_size)
> 				return -1;
> 		}
> 	}

This might also do nasty things to performance - this code path gets
travelled fairly often for small dicts.

Does anybody know the average (mean/mode/median) size for dicts in
a "typical" python program?

  -------

Using mal's pybench with and without the patch I posted shows a 0.30%
slowdown, including these interesting lines:

                  DictCreation:    1662.80 ms   11.09 us  +34.23%
        SimpleDictManipulation:     764.50 ms    2.55 us  -15.67%

DictCreation repeatedly creates dicts of size 0 and 3.
SimpleDictManipulation repeatedly adds six elements to a dict and then
deletes them again.

Dicts of size 3 are likely to be the worst case wrt. my patch; without
it, they will have a ma_fill of 3 and a ma_size of 4 (but calling
PyDict_SetItem again will trigger a resize - this is what happens in
Ping's example), but with my patch they will always have an ma_fill of
3 and a ma_size of 8.  Hence why the DictCreation is so much worse,
and why I asked the question about average dict sizes.

Mind you, 6 is a similar edge case, so I don't know why
SimpleDictManipulation does better.  Maybe something to do with
collisions or memory behaviour.

Cheers,
M.

-- 
  I don't remember any dirty green trousers.
                                             -- Ian Jackson, ucam.chat



From skip@pobox.com (Skip Montanaro)  Tue Mar 20 15:19:54 2001
From: skip@pobox.com (Skip Montanaro) (Skip Montanaro)
Date: Tue, 20 Mar 2001 09:19:54 -0600 (CST)
Subject: [Python-Dev] zipfile.py - detect if zipinfo is a dir  (fwd)
Message-ID: <15031.29978.95112.488244@beluga.mojam.com>

--sKyh7lSXYH
Content-Type: text/plain; charset=us-ascii
Content-Description: message body text
Content-Transfer-Encoding: 7bit

Not sure why I received this note.  I am passing it along to Jim Ahlstrom
and python-dev.

Skip


--sKyh7lSXYH
Content-Type: message/rfc822
Content-Description: forwarded message
Content-Transfer-Encoding: 7bit

Message-Id: <E14fNIF-0002rT-00@usw-sf-web2.sourceforge.net>
From: Stephane Matamontero <dev1.gemodek@t-online.de>
Sender: nobody <nobody@usw-pr-web.sourceforge.net>
To: skip@mojam.com
Subject: zipfile.py - detect if zipinfo is a dir 
Date: Tue, 20 Mar 2001 06:39:27 -0800

Hi,

I am just working with the zipfile.py module and found out
a way to check if a zipinfo object is a dir:
I created 2 testzipfiles on Win2000 and Linux with info-zip.

I checked on Python 2.0 on Win2000 the following code
in a file called testview.py (my file)
The constants were taken from io.h of MSVC 6.0, the 
isdirzipinfo() function which accepts a zipinfo object
works at least under Win2000/Linux.

Perhaps you can integrate the code in a future release.

Bye

 Stephane

-------------------------- code follows ----------------

#/* File attribute constants for _findfirst() */

_A_NORMAL= 0x00   # /* Normal file - No read/write restrictions */
_A_RDONLY= 0x01   # /* Read only file */
_A_HIDDEN= 0x02   # /* Hidden file */
_A_SYSTEM= 0x04   # /* System file */
_A_SUBDIR= 0x10   # /* Subdirectory */
_A_ARCH=   0x20   # /* Archive file */

def isdirzipinfo(zi):
    isdir=((zi.external_attr & 0xff) & _A_SUBDIR) !=0
    return isdir
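[The low byte of `external_attr` holds MS-DOS attribute bits, so the check above depends on which platform wrote the archive. Zip directory entries are also marked by a trailing slash in the stored name -- a portable test that zipfile itself later adopted as `ZipInfo.is_dir()` in Python 3.6. A sketch:]

```python
def isdir_by_name(filename):
    # Directory entries in a zip archive end with "/",
    # whatever platform created the archive.
    return filename.endswith("/")
```

[Usage: `isdir_by_name(zi.filename)` for a zipinfo object `zi`.]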




--sKyh7lSXYH--


From tim.one@home.com  Tue Mar 20 16:01:21 2001
From: tim.one@home.com (Tim Peters)
Date: Tue, 20 Mar 2001 11:01:21 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <m34rwoh3s0.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEONJGAA.tim.one@home.com>

[Michael Hudson]
>>> Maybe you could do the check for resize *after* the call to
>>> insertdict?  I think that would work, but I wouldn't like to go
>>> messing with such a performance critical bit of code without some
>>> careful thinking.

[Guido]
>> No, that could still decide to resize, couldn't it?

[Michael]
> Yes, but not when you're inserting on a key that is already in the
> dictionary - because the resize would have happened when the key was
> inserted into the dictionary, and thus the problem we're seeing here
> wouldn't happen.

Careful:  this comment is only half the truth:

	/* if fill >= 2/3 size, double in size */

The dictresize following is also how dicts *shrink*.  That is, build up a
dict, delete a whole bunch of keys, and nothing at all happens to the size
until you call setitem again (actually, I think you need to call it more than
once -- the behavior is tricky).  In any case, that a key is already in the
dict does not guarantee that a dict won't resize (via shrinking) when doing a
setitem.

We could bite the bullet and add a new PyDict_AdjustSize function, just
duplicating the resize logic.  Then loops that know they won't be changing
the size can call that before starting.  Delicate, though.



From jim@interet.com  Tue Mar 20 17:42:11 2001
From: jim@interet.com (James C. Ahlstrom)
Date: Tue, 20 Mar 2001 12:42:11 -0500
Subject: [Python-Dev] Re: zipfile.py - detect if zipinfo is a dir  (fwd)
References: <15031.29978.95112.488244@beluga.mojam.com>
Message-ID: <3AB79673.C29C0BBE@interet.com>

Skip Montanaro wrote:
> 
> Not sure why I received this note.  I am passing it along to Jim Ahlstrom
> and python-dev.

Thanks.  I will look into it.

JimA


From fredrik@pythonware.com  Tue Mar 20 19:20:38 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 20:20:38 +0100
Subject: [Python-Dev] Function in os module for available disk space, why  not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF> <3AB62EAE.FCFD7C9F@lemburg.com>
Message-ID: <048401c0b172$dd6892a0$e46940d5@hagrid>

mal wrote:

>         return s[statvfs.F_BAVAIL] * long(s[statvfs.F_BSIZE])

F_FRSIZE, not F_BSIZE

(and my plan is to make a statvfs subset available on
all platforms, which makes your code even simpler...)

Cheers /F



From jack@oratrix.nl  Tue Mar 20 20:34:51 2001
From: jack@oratrix.nl (Jack Jansen)
Date: Tue, 20 Mar 2001 21:34:51 +0100
Subject: [Python-Dev] Test for case-sensitive imports?
Message-ID: <20010320203457.3A72EEA11D@oratrix.oratrix.nl>

Hmm, apparently the flurry of changes to the case-checking code in
import has broken the case-checks for the macintosh. I'll fix that,
but maybe we should add a testcase for case-sensitive import?

And a related point: the logic for determining whether to use a
mac-specific, windows-specific or unix-specific routine in the getpass 
module is error prone.

Why these two points are related is left as an exercise to the reader:-)
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++


From jack@oratrix.nl  Tue Mar 20 20:47:37 2001
From: jack@oratrix.nl (Jack Jansen)
Date: Tue, 20 Mar 2001 21:47:37 +0100
Subject: [Python-Dev] test_coercion failing
Message-ID: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>

Test_coercion fails on the Mac (current CVS sources) with
We expected (repr): '(1+0j)'
But instead we got: '(1-0j)'
test test_coercion failed -- Writing: '(1-0j)', expected: '(1+0j)'

The computation it was doing was "2 / (2+0j) =".

To my mathematical eye it shouldn't be complaining in the first place, 
but I assume this may be either a missing round() somewhere or a
symptom of a genuine bug.

Can anyone point me in the right direction?
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++


From guido@digicool.com  Tue Mar 20 21:00:26 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 16:00:26 -0500
Subject: [Python-Dev] Test for case-sensitive imports?
In-Reply-To: Your message of "Tue, 20 Mar 2001 21:34:51 +0100."
 <20010320203457.3A72EEA11D@oratrix.oratrix.nl>
References: <20010320203457.3A72EEA11D@oratrix.oratrix.nl>
Message-ID: <200103202100.QAA01606@cj20424-a.reston1.va.home.com>

> Hmm, apparently the flurry of changes to the case-checking code in
> import has broken the case-checks for the macintosh. I'll fix that,
> but maybe we should add a testcase for case-sensitive import?

Thanks -- yes, please add a testcase!  ("import String" should do it,
right? :-)

> And a related point: the logic for determining whether to use a
> mac-specific, windows-specific or unix-specific routine in the getpass 
> module is error prone.

Can you fix that too?

> Why these two points are related is left as an exercise to the reader:-)

:-)

--Guido van Rossum (home page: http://www.python.org/~guido/)
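Guido's "import String" suggestion works because the wrongly-cased name should fail everywhere: on a case-sensitive file system the module simply isn't found, and on a case-insensitive one the import machinery's case check should reject it. A sketch of such a testcase, in modern syntax:

```python
# "string" exists in the standard library; "String" must not import,
# whether the file system is case-sensitive or not.
try:
    import String
    case_check_ok = False
except ImportError:
    case_check_ok = True

print(case_check_ok)  # True
```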


From mal@lemburg.com  Tue Mar 20 21:03:40 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Tue, 20 Mar 2001 22:03:40 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
 not?
References: <LNBBLJKPBEHFEDALKOLCKENPJGAA.tim.one@home.com> <01ec01c0b11b$ff9593c0$e46940d5@hagrid>
Message-ID: <3AB7C5AC.DE61F186@lemburg.com>

Fredrik Lundh wrote:
> 
> Tim Peters wrote:
> > One of the best things Python ever did was to introduce os.path.getsize() +
> > friends, saving the bulk of the world from needing to wrestle with the
> > obscure Unix stat() API.
> 
> yup (I remember lobbying for those years ago), but that doesn't
> mean that we cannot make already existing low-level APIs work
> on as many platforms as possible...
> 
> (just like os.popen etc)
> 
> adding os.statvfs for windows is pretty much a bug fix (for 2.1?),
> but adding a new API is not (2.2).
> 
> > os.chmod() is another x-platform teachability pain
> 
> shutil.chmod("file", "g+x"), anyone?

Wasn't shutil declared obsolete ?
 
> > if there's anything worth knowing in the bowels of statvfs(), let's
> > please spell it in a human-friendly way from the start.
> 
> how about os.path.getfreespace("path") and
> os.path.gettotalspace("path") ?

Anybody care to add the missing parts in:

import sys,os

try:
    os.statvfs

except AttributeError:
    # Win32 implementation...
    # Mac implementation...
    pass

else:
    import statvfs

    def freespace(path):
        """ freespace(path) -> integer
        Return the number of bytes available to the user on the file system
        pointed to by path."""
        s = os.statvfs(path)
        return s[statvfs.F_BAVAIL] * long(s[statvfs.F_BSIZE])

if __name__=='__main__':
    path = sys.argv[1]
    print 'Free space on %s: %i kB (%i bytes)' % (path,
                                                  freespace(path) / 1024,
                                                  freespace(path))


totalspace() should be just as easy to add and I'm pretty
sure that you can get that information on *all* platforms
(not necessarily using the same APIs though).
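For reference, a sketch of the same function in later Pythons: os.statvfs now returns a named result (POSIX only), and since Python 3.3 shutil.disk_usage covers Windows as well:

```python
import os
import shutil

def freespace(path):
    """Bytes available to a non-privileged user on the file system
    containing path (POSIX only: os.statvfs)."""
    s = os.statvfs(path)
    return s.f_bavail * s.f_frsize  # f_frsize, not f_bsize, per POSIX

# Portable spelling, Windows included (Python 3.3+):
total, used, free = shutil.disk_usage(".")
print(free)
```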

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From guido@digicool.com  Tue Mar 20 21:16:32 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 16:16:32 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: Your message of "Tue, 20 Mar 2001 21:47:37 +0100."
 <20010320204742.BC08AEA11D@oratrix.oratrix.nl>
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>
Message-ID: <200103202116.QAA01770@cj20424-a.reston1.va.home.com>

> Test_coercion fails on the Mac (current CVS sources) with
> We expected (repr): '(1+0j)'
> But instead we got: '(1-0j)'
> test test_coercion failed -- Writing: '(1-0j)', expected: '(1+0j)'
> 
> The computation it was doing was "2 / (2+0j) =".
> 
> To my mathematical eye it shouldn't be complaining in the first place, 
> but I assume this may be either a missing round() somewhere or a
> symptom of a genuine bug.
> 
> Can anyone point me in the right direction?

Tim admits that he changed complex division and repr().  So that's
where you might want to look.  If you wait a bit, Tim will check his
algorithm to see if a "minus zero" can pop out of it.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From aahz@rahul.net  Tue Mar 20 21:38:27 2001
From: aahz@rahul.net (Aahz Maruch)
Date: Tue, 20 Mar 2001 13:38:27 -0800 (PST)
Subject: [Python-Dev] Function in os module for available disk space, why
In-Reply-To: <3AB7C5AC.DE61F186@lemburg.com> from "M.-A. Lemburg" at Mar 20, 2001 10:03:40 PM
Message-ID: <20010320213828.2D30F99C80@waltz.rahul.net>

M.-A. Lemburg wrote:
> 
> Wasn't shutil declared obsolete ?

<blink>  What?!
-- 
                      --- Aahz (@pobox.com)

Hugs and backrubs -- I break Rule 6             http://www.rahul.net/aahz
Androgynous poly kinky vanilla queer het

I don't really mind a person having the last whine, but I do mind
someone else having the last self-righteous whine.


From paul@pfdubois.com  Tue Mar 20 23:56:06 2001
From: paul@pfdubois.com (Paul F. Dubois)
Date: Tue, 20 Mar 2001 15:56:06 -0800
Subject: [Python-Dev] PEP 242 Released
Message-ID: <ADEOIFHFONCLEEPKCACCGEANCHAA.paul@pfdubois.com>

PEP: 242
Title: Numeric Kinds
Version: $Revision: 1.1 $
Author: paul@pfdubois.com (Paul F. Dubois)
Status: Draft
Type: Standards Track
Created: 17-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    This proposal gives the user optional control over the precision
    and range of numeric computations so that a computation can be
    written once and run anywhere with at least the desired precision
    and range.  It is backward compatible with existing code.  The
    meaning of decimal literals is clarified.


Rationale

    Currently, in every language except Fortran 90, it is impossible to
    write a portable program that uses floating point and
    gets roughly the same answer regardless of platform -- or refuses
    to compile if that is not possible.  Python currently has only one
    floating point type, equal to a C double in the C implementation.

    No type exists corresponding to single or quad floats.  It would
    complicate the language to try to introduce such types directly
    and their subsequent use would not be portable.  This proposal is
    similar to the Fortran 90 "kind" solution, adapted to the Python
    environment.  With this facility an entire calculation can be
    switched from one level of precision to another by changing a
    single line.  If the desired precision does not exist on a
    particular machine, the program will fail rather than get the
    wrong answer.  Since coding in this style would involve an early
    call to the routine that will fail, this is the next best thing to
    not compiling.


Supported Kinds

    Each Python compiler may define as many "kinds" of integer and
    floating point numbers as it likes, except that it must support at
    least two kinds of integer corresponding to the existing int and
    long, and must support at least one kind of floating point number,
    equivalent to the present float.  The range and precision of
    these kinds are processor dependent, as at present, except for the
    "long integer" kind, which can hold an arbitrary integer.  The
    built-in functions int(), float(), long() and complex() convert
    inputs to these default kinds as they do at present.  (Note that a
    Unicode string is actually a different "kind" of string and that a
    sufficiently knowledgeable person might be able to expand this PEP
    to cover that case.)

    Within each type (integer, floating, and complex) the compiler
    supports a linearly-ordered set of kinds, with the ordering
    determined by the ability to hold numbers of an increased range
    and/or precision.


Kind Objects

    Three new standard functions are defined in a module named
    "kinds".  They return callable objects called kind objects.  Each
    int or floating kind object f has the signature result = f(x), and
    each complex kind object has the signature result = f(x, y=0.).

    int_kind(n)
        For n >= 1, return a callable object whose result is an
        integer kind that will hold an integer number in the open
        interval (-10**n,10**n).  This function always succeeds, since
        it can return the 'long' kind if it has to. The kind object
        accepts arguments that are integers including longs.  If n ==
        0, returns the kind object corresponding to long.

    float_kind(nd, n)
        For nd >= 0 and n >= 1, return a callable object whose result
        is a floating point kind that will hold a floating-point
        number with at least nd digits of precision and a base-10
        exponent in the open interval (-n, n).  The kind object
        accepts arguments that are integer or real.

    complex_kind(nd, n)
        Return a callable object whose result is a complex kind that
        will hold a complex number each of whose components
        (.real, .imag) is of kind float_kind(nd, n).  The kind object
        will accept one argument that is integer, real, or complex, or
        two arguments, each integer or real.

    The compiler will return a kind object corresponding to the least
    of its available set of kinds for that type that has the desired
    properties.  If no kind with the desired qualities exists in a
    given implementation an OverflowError exception is thrown.  A kind
    function converts its argument to the target kind, but if the
    result does not fit in the target kind's range, an OverflowError
    exception is thrown.

    Kind objects also accept a string argument for conversion of
    literal notation to their kind.

    Besides their callable behavior, kind objects have attributes
    giving the traits of the kind in question.  The list of traits
    needs to be completed.


The Meaning of Literal Values

    Literal integer values without a trailing L are of the least
    integer kind required to represent them.  An integer literal with
    a trailing L is a long.  Literal decimal values are of the
    greatest available binary floating-point kind.


Concerning Infinite Floating Precision

    This section makes no proposals and can be omitted from
    consideration.  It is for illuminating an intentionally
    unimplemented 'corner' of the design.

    This PEP does not propose the creation of an infinite precision
    floating point type, just leaves room for it.  Just as int_kind(0)
    returns the long kind object, if in the future an infinitely
    precise decimal kind is available, float_kind(0,0) could return a
    function that converts to that type.  Since such a kind function
    accepts string arguments, programs could then be written that are
    completely precise.  Perhaps in analogy to r'a raw string', 1.3r
    might be available as syntactic sugar for calling the infinite
    floating kind object with argument '1.3'.  r could be thought of
    as meaning 'rational'.


Complex numbers and kinds

    Complex numbers are always pairs of floating-point numbers with
    the same kind.  A Python compiler must support a complex analog of
    each floating point kind it supports, if it supports complex
    numbers at all.


Coercion

    In an expression, coercion between different kinds is to the
    greater kind.  For this purpose, all complex kinds are "greater
    than" all floating-point kinds, and all floating-point kinds are
    "greater than" all integer kinds.


Examples

    In module myprecision.py:

        import kinds
        tinyint = kinds.int_kind(1)
        single = kinds.float_kind(6, 90)
        double = kinds.float_kind(15, 300)
        csingle = kinds.complex_kind(6, 90)

    In the rest of my code:

        from myprecision import tinyint, single, double, csingle
        n = tinyint(3)
        x = double(1.e20)
        z = 1.2
        # builtin float gets you the default float kind, properties unknown
        w = x * float(x)
        w = x * double(z)
        u = csingle(x + z * 1.0j)
        u2 = csingle(x+z, 1.0)

    Note how the entire calculation can then be switched to a higher
    precision by changing the arguments in myprecision.py.

    Comment: note that you aren't promised that single != double; but
    you are promised that double(1.e20) will hold a number with 15
    decimal digits of precision and a range up to 10**300 or that the
    float_kind call will fail.
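    Since CPython offers only one floating kind (the C double), a
    minimal sketch of float_kind can check the requested traits
    against sys.float_info and hand back the builtin float.
    Everything below is hypothetical illustration of the proposed
    interface, not part of any implementation:

```python
import sys

def float_kind(nd, n):
    # Return a converter for a float kind with at least nd digits of
    # precision and base-10 exponents in the open interval (-n, n),
    # or raise OverflowError if no such kind exists.  CPython's only
    # candidate kind is the C double, whose traits sys.float_info gives.
    if nd > sys.float_info.dig or n > sys.float_info.max_10_exp:
        raise OverflowError("no float kind with %d digits and range 10**%d"
                            % (nd, n))
    return float  # kind objects accept strings too, as float does

single = float_kind(6, 90)    # both satisfied by the C double
double = float_kind(15, 300)
print(single(1.5), double("1e20"))
```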


Open Issues

    The assertion that a decimal literal means a binary floating-point
    value of the largest available kind is in conflict with other
    proposals about Python's numeric model.  This PEP asserts that
    these other proposals are wrong and that part of them should not
    be implemented.

    Determine the exact list of traits for integer and floating point
    numbers.  There are some standard Fortran routines that do this
    but I have to track them down.  Also there should be information
    sufficient to create a Numeric array of an equal or greater kind.


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:



From tim.one@home.com  Wed Mar 21 03:33:15 2001
From: tim.one@home.com (Tim Peters)
Date: Tue, 20 Mar 2001 22:33:15 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>

Everyone!  Run this program under current CVS:

x = 0.0
print "%.17g" % -x
print "%+.17g" % -x

What do you get?  WinTel prints "0" for the first and "+0" for the second.

C89 doesn't define the results.

C99 requires "-0" for both (on boxes with signed floating zeroes, which is
virtually all boxes today due to IEEE 754).

I don't want to argue the C rules, I just want to know whether this *does*
vary across current platforms.



From tim.one@home.com  Wed Mar 21 03:46:04 2001
From: tim.one@home.com (Tim Peters)
Date: Tue, 20 Mar 2001 22:46:04 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <200103202116.QAA01770@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBDJHAA.tim.one@home.com>

[Guido]
> ...
> If you wait a bit, Tim will check his algorithm to see if
> a "minus zero" can pop out of it.

I'm afraid Jack will have to work harder than that.  He should have gotten a
minus 0 out of this one if and only if he got a minus 0 before, and under 754
rules he *will* get a minus 0 if and only if he told his 754 hardware to use
its "to minus infinity" rounding mode.

Is test_coercion failing on any platform other than Macintosh?



From tim.one@home.com  Wed Mar 21 04:01:13 2001
From: tim.one@home.com (Tim Peters)
Date: Tue, 20 Mar 2001 23:01:13 -0500
Subject: [Python-Dev] Test for case-sensitive imports?
In-Reply-To: <200103202100.QAA01606@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBEJHAA.tim.one@home.com>

[ Guido van Rossum]
> Hmm, apparently the flurry of changes to the case-checking code in
> import has broken the case-checks for the macintosh.

Hmm.  This should have been broken way back in 2.1a1, as the code you later
repaired was introduced by the first release of Mac OS X changes.  Try to
stay more current in the future <wink>.

> I'll fix that, but maybe we should add a testcase for
> case-sensitive import?

Yup!  Done now.



From uche.ogbuji@fourthought.com  Wed Mar 21 04:23:01 2001
From: uche.ogbuji@fourthought.com (Uche Ogbuji)
Date: Tue, 20 Mar 2001 21:23:01 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation
In-Reply-To: Message from "Tim Peters" <tim.one@home.com>
 of "Tue, 20 Mar 2001 02:08:53 EST." <LNBBLJKPBEHFEDALKOLCMENKJGAA.tim.one@home.com>
Message-ID: <200103210423.VAA20300@localhost.localdomain>

> [Uche Ogbuji]
> > Quite interesting.  I brought up this *exact* point at the
> > Stackless BOF at IPC9.  I mentioned that the immediate reason
> > I was interested in Stackless was to supercharge the efficiency
> > of 4XSLT.  I think that a stackless 4XSLT could pretty much
> > annihilate the other processors in the field for performance.
> 
> Hmm.  I'm interested in clarifying the cost/performance boundaries of the
> various approaches.  I don't understand XSLT (I don't even know what it is).
> Do you grok the difference between full-blown Stackless and Icon-style
> generators?

To a decent extent, based on reading your posts carefully.

> The correspondent I quoted believed the latter were on-target
> for XSLT work, and given the way Python works today generators are easier to
> implement than full-blown Stackless.  But while I can speak with some
> confidence about the latter, I don't know whether they're sufficient for what
> you have in mind.

Based on a discussion with Christian at IPC9, they are.  I should have been 
more clear about that.  My main need is to be able to change a bit of context 
and invoke a different execution path, without going through the full overhead 
of a function call.  XSLT, if written "naturally", tends to involve huge 
numbers of such tweak-context-and-branch operations.

> If this is some flavor of one-at-time tree-traversal algorithm, generators
> should suffice.
> 
> class TreeNode:
>     # with self.value
>     #      self.children, a list of TreeNode objects
>     ...
>     def generate_kids(self):  # pre-order traversal
>         suspend self.value
>         for kid in self.children:
>             for itskids in kid.generate_kids():
>                 suspend itskids
> 
> for k in someTreeNodeObject.generate_kids():
>     print k
> 
> So the control-flow is thoroughly natural, but you can only suspend to your
> immediate invoker (in recursive traversals, this "walks up the chain" of
> generators for each result).  With explicitly resumable generator objects,
> multiple trees (or even general graphs -- doesn't much matter) can be
> traversed in lockstep (or any other interleaving that's desired).
> 
> Now decide <wink>.

Suspending only to the invoker should do the trick because it is typically a 
single XSLT instruction that governs multiple tree-operations with varied 
context.

At IPC9, Guido put up a poll of likely use of stackless features, and it was a 
pretty clear arithmetic progression from those who wanted to use microthreads, 
to those who wanted co-routines, to those who wanted just generators.  The 
generator folks were probably 2/3 of the assembly.  Looks as if many have 
decided, and they seem to agree with you.
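For concreteness, Tim's pre-order traversal maps directly onto the
"yield" spelling that Python ultimately adopted; a sketch in modern
syntax, with the class layout assumed from his comments:

```python
class TreeNode:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

    def generate_kids(self):
        # Pre-order traversal: each result is re-yielded up the chain
        # of suspended generators, one invoker at a time.
        yield self.value
        for kid in self.children:
            for itskids in kid.generate_kids():
                yield itskids

tree = TreeNode(1, [TreeNode(2, [TreeNode(4)]), TreeNode(3)])
print(list(tree.generate_kids()))  # [1, 2, 4, 3]
```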


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji@fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python




From greg@cosc.canterbury.ac.nz  Wed Mar 21 04:49:33 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 21 Mar 2001 16:49:33 +1200 (NZST)
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103210423.VAA20300@localhost.localdomain>
Message-ID: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>

>     def generate_kids(self):  # pre-order traversal
>         suspend self.value
>         for kid in self.children:
>             for itskids in kid.generate_kids():
>                 suspend itskids

Can I make a suggestion: If we're going to get this generator
stuff, I think it would read better if the suspending statement
were

   yield x

rather than

   suspend x

because x is not the thing that we are suspending!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From fdrake@acm.org  Wed Mar 21 04:58:10 2001
From: fdrake@acm.org (Fred L. Drake)
Date: Tue, 20 Mar 2001 23:58:10 -0500
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>
Message-ID: <web-1702694@digicool.com>

Greg Ewing <greg@cosc.canterbury.ac.nz> wrote:
 > stuff, I think it would read better if the suspending
 > statement were
 > 
 >    yield x
 > 
 > rather than
 > 
 >    suspend x

  I agree; this really improves readability.  I'm sure
someone knows of a precedent for the "suspend" keyword, but
the only one I recall seeing before is "yield" (Sather).


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations


From nas@arctrix.com  Wed Mar 21 05:04:42 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Tue, 20 Mar 2001 21:04:42 -0800
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>; from tim.one@home.com on Tue, Mar 20, 2001 at 10:33:15PM -0500
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl> <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <20010320210442.A22819@glacier.fnational.com>

On Tue, Mar 20, 2001 at 10:33:15PM -0500, Tim Peters wrote:
> Everyone!  Run this program under current CVS:

There are probably lots of Linux testers around but here's what I
get:

    Python 2.1b2 (#2, Mar 20 2001, 23:52:29) 
    [GCC 2.95.3 20010219 (prerelease)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> x = 0.0
    >>> print "%.17g" % -x
    -0
    >>> print "%+.17g" % -x
    -0

libc is GNU 2.2.2  (if that matters).  test_coercion works for me
too.  Is test_coercion testing too much accidental implementation
behavior?

  Neil


From ping@lfw.org  Wed Mar 21 06:14:57 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Tue, 20 Mar 2001 22:14:57 -0800 (PST)
Subject: [Python-Dev] Re: Generator syntax
In-Reply-To: <web-1702694@digicool.com>
Message-ID: <Pine.LNX.4.10.10103202213070.4368-100000@skuld.kingmanhall.org>

Greg Ewing <greg@cosc.canterbury.ac.nz> wrote:
> stuff, I think it would read better if the suspending
> statement were
> 
>    yield x
> 
> rather than
> 
>    suspend x

Fred Drake wrote:
>   I agree; this really improves readability.

Indeed, shortly after i wrote my generator examples, i wished i'd
written "generate x" rather than "suspend x".  "yield x" is good too.


-- ?!ng

Happiness comes more from loving than being loved; and often when our
affection seems wounded it is only our vanity bleeding. To love, and
to be hurt often, and to love again--this is the brave and happy life.
    -- J. E. Buchrose 



From tim.one@home.com  Wed Mar 21 07:15:23 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 21 Mar 2001 02:15:23 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <20010320210442.A22819@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEBNJHAA.tim.one@home.com>

[Neil Schemenauer, among others confirming Linux behavior]
> There are probably lots of Linux testers around but here's what I
> get:
>
>     Python 2.1b2 (#2, Mar 20 2001, 23:52:29)
>     [GCC 2.95.3 20010219 (prerelease)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> x = 0.0
>     >>> print "%.17g" % -x
>     -0
>     >>> print "%+.17g" % -x
>     -0
>
> libc is GNU 2.2.2  (if that matters).

Indeed, libc is probably the *only* thing that matters (Python defers to the
platform libc for float formatting).

> test_coercion works for me too.  Is test_coercion testing too much
> accidental implementation behavior?

I don't think so.  As a later message said, Jack *should* be getting a minus
0 if and only if he's running on an IEEE-754 box (extremely likely) and set
the rounding mode to minus-infinity (extremely unlikely).

But we don't yet know what the above prints on *his* box, so we still don't
know whether that's relevant.

WRT display of signed zeroes (which may or may not have something to do with
Jack's problem), Python obviously varies across platforms.  But there is no
portable way in C89 to determine the sign of a zero, so we either live with
the cross-platform discrepancies, or force zeroes on output to always be
positive (in opposition to what C99 mandates).  (Note that I reject out of
hand that we #ifdef the snot out of the code to be able to detect the sign of
a 0 on various platforms -- Python doesn't conform to any other 754 rules,
and this one is minor.)

Ah, this is coming back to me now:  at Dragon this also popped up in our C++
code.  At least one flavor of Unix there also displayed -0 as if positive.  I
fiddled our output to suppress it, a la

def output(afloat):
    if not afloat:
        afloat *= afloat  # forces -0 and +0 to +0
    print afloat

(but in C++ <wink>).
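[Editorial note: later standards closed the gap Tim describes -- C99's
copysign (exposed in Python 2.6+ as math.copysign) gives a portable way to
inspect a zero's sign bit.  A sketch, in modern Python syntax, of both Tim's
suppression trick and the later check:]

```python
import math

def output(afloat):
    # Tim's trick: multiplying a zero by itself forces -0.0 to +0.0,
    # so the platform's float formatting never sees the sign bit.
    if not afloat:
        afloat *= afloat
    print(afloat)

# math.copysign transfers the sign of its second argument onto the first,
# which exposes the sign of a zero that C89 could not portably inspect:
print(math.copysign(1.0, -0.0))  # -1.0: this zero is negative
print(math.copysign(1.0, 0.0))   # 1.0: this zero is positive
```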

would-rather-understand-jack's-true-problem-than-cover-up-a-
   symptom-ly y'rs  - tim



From fredrik@effbot.org  Wed Mar 21 07:26:26 2001
From: fredrik@effbot.org (Fredrik Lundh)
Date: Wed, 21 Mar 2001 08:26:26 +0100
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
References: <web-1702694@digicool.com>
Message-ID: <012601c0b1d8$7dc3cc50$e46940d5@hagrid>

the real fred wrote:

> I agree; this really improves readability.  I'm sure someone
> knows of a precedent for the "suspend" keyword

Icon

(the suspend keyword "leaves the generating function
in suspension")

> but the only one I recall seeing before is "yield" (Sather).

I associate "yield" with non-preemptive threading (yield
to anyone else, not necessarily my caller).

Cheers /F



From tim.one@home.com  Wed Mar 21 07:25:42 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 21 Mar 2001 02:25:42 -0500
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com>

I also like "yield", but when talking about Icon-style generators to people
who may not be familiar with them, I'll continue to use "suspend" (since
that's the word they'll see in the Icon docs, and they can get many more
examples from the latter than from me).
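[Editorial note: the Icon-style "suspend" Tim describes is exactly what
"yield" came to mean with Python 2.2's generators.  A runnable version of
the pre-order traversal sketched earlier in this thread (class and method
names are illustrative):]

```python
class TreeNode:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

    def generate_kids(self):
        # Pre-order traversal: each yielded value "walks up the chain"
        # of nested generators to the original caller.
        yield self.value
        for kid in self.children:
            for v in kid.generate_kids():
                yield v

tree = TreeNode(1, [TreeNode(2, [TreeNode(4)]), TreeNode(3)])
print([v for v in tree.generate_kids()])  # [1, 2, 4, 3]
```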



From tommy@ilm.com  Wed Mar 21 07:27:12 2001
From: tommy@ilm.com (Flying Cougar Burnette)
Date: Tue, 20 Mar 2001 23:27:12 -0800 (PST)
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>
 <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <15032.22433.953503.130175@mace.lucasdigital.com>

I get the same ("0" then "+0") on my irix65 O2.  test_coerce succeeds
as well.


Tim Peters writes:
| Everyone!  Run this program under current CVS:
| 
| x = 0.0
| print "%.17g" % -x
| print "%+.17g" % -x
| 
| What do you get?  WinTel prints "0" for the first and "+0" for the second.
| 
| C89 doesn't define the results.
| 
| C99 requires "-0" for both (on boxes with signed floating zeroes, which is
| virtually all boxes today due to IEEE 754).
| 
| I don't want to argue the C rules, I just want to know whether this *does*
| vary across current platforms.
| 
| 
| _______________________________________________
| Python-Dev mailing list
| Python-Dev@python.org
| http://mail.python.org/mailman/listinfo/python-dev


From tommy@ilm.com  Wed Mar 21 07:37:00 2001
From: tommy@ilm.com (Flying Cougar Burnette)
Date: Tue, 20 Mar 2001 23:37:00 -0800 (PST)
Subject: [Python-Dev] tests failing on irix65
Message-ID: <15032.22504.605383.113425@mace.lucasdigital.com>

Hey Gang,

Given the latest state of the CVS tree I am getting the following
failures on my irix65 O2 (and have been for quite some time- I'm just
now getting around to reporting them):


------------%< snip %<----------------------%< snip %<------------

test_pty
The actual stdout doesn't match the expected stdout.
This much did match (between asterisk lines):
**********************************************************************
test_pty
**********************************************************************
Then ...
We expected (repr): 'I'
But instead we got: '\n'
test test_pty failed -- Writing: '\n', expected: 'I'


importing test_pty into an interactive interpreter gives this:

Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
Type "copyright", "credits" or "license" for more information.
>>> import test.test_pty
Calling master_open()
Got master_fd '4', slave_name '/dev/ttyq6'
Calling slave_open('/dev/ttyq6')
Got slave_fd '5'
Writing to slave_fd

I wish to buy a fish license.For my pet fish, Eric.
calling pty.fork()
Waiting for child (16654) to finish.
Child (16654) exited with status 1024.
>>> 

------------%< snip %<----------------------%< snip %<------------

test_symtable
test test_symtable crashed -- exceptions.TypeError: unsubscriptable object


running the code test_symtable code by hand in the interpreter gives
me:

Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
Type "copyright", "credits" or "license" for more information.
>>> import _symtable
>>> symbols = _symtable.symtable("def f(x): return x", "?", "exec")
>>> symbols
<symtable entry global(0), line 0>
>>> symbols[0]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unsubscriptable object


------------%< snip %<----------------------%< snip %<------------

test_zlib
make: *** [test] Segmentation fault (core dumped)


when I run python in a debugger and import test_zlib by hand I get
this:

Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
Type "copyright", "credits" or "license" for more information.
>>> import test.test_zlib
0xe5c1a120 0x43b6aa94
0xbd602f7 0xbd602f7
expecting Bad compression level
expecting Invalid initialization option
expecting Invalid initialization option
normal compression/decompression succeeded
compress/decompression obj succeeded
decompress with init options succeeded
decompressobj with init options succeeded

The failure is on line 86 of test_zlib.py (calling obj.flush()).
here are the relevant portions of the call stack (sorry they're
stripped):

t_delete(<stripped>) ["malloc.c":801]
realfree(<stripped>) ["malloc.c":531]
cleanfree(<stripped>) ["malloc.c":944]
_realloc(<stripped>) ["malloc.c":329]
_PyString_Resize(<stripped>) ["stringobject.c":2433]
PyZlib_flush(<stripped>) ["zlibmodule.c":595]
call_object(<stripped>) ["ceval.c":2706]
...


From mal@lemburg.com  Wed Mar 21 10:02:54 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 21 Mar 2001 11:02:54 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
References: <20010320213828.2D30F99C80@waltz.rahul.net>
Message-ID: <3AB87C4E.450723C2@lemburg.com>

Aahz Maruch wrote:
> 
> M.-A. Lemburg wrote:
> >
> > Wasn't shutil declared obsolete ?
> 
> <blink>  What?!

Guido once pronounced on this... mostly because of the comment
at the top regarding cross-platform compatibility:

"""Utility functions for copying files and directory trees.

XXX The functions here don't copy the resource fork or other metadata on Mac.

"""

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Wed Mar 21 10:41:38 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 21 Mar 2001 11:41:38 +0100
Subject: [Python-Dev] Re: What has become of PEP224 ?
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com> <15030.59057.866982.538935@anthem.wooz.org>
Message-ID: <3AB88562.F6FB0042@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "GvR" == Guido van Rossum <guido@python.org> writes:
> 
>     GvR> So I see little chance for PEP 224.  Maybe I should just
>     GvR> pronounce on this, and declare the PEP rejected.
> 
> So, was that a BDFL pronouncement or not? :)

I guess so. 

I'll add Guido's comments (the ones he mailed me in
private) to the PEP and then forget about the idea of getting
doc-strings to play nice with attributes... :-(

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From mal@lemburg.com  Wed Mar 21 10:46:01 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Wed, 21 Mar 2001 11:46:01 +0100
Subject: [Python-Dev] RE: Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEEODGAA.MarkH@ActiveState.com>
Message-ID: <3AB88669.3FDC1DE3@lemburg.com>

Mark Hammond wrote:
> 
> OK - it appears everyone agrees we should go the "Unicode API" route.  I
> actually thought my scheme did not preclude moving to this later.
> 
> This is a much bigger can of worms than I have bandwidth to take on at the
> moment.  As Martin mentions, what will os.listdir() return on Win9x vs
> Win2k?  What does passing a Unicode object to a non-Unicode Win32 platform
> mean? etc.  How do Win95/98/ME differ in their Unicode support?  Do the
> various service packs for each of these change the basic support?
> 
> So unfortunately this simply means the status quo remains until someone
> _does_ have the time and inclination.  That may well be me in the future,
> but is not now.  It also means that until then, Python programmers will
> struggle with this and determine that they can make it work simply by
> encoding the Unicode as an "mbcs" string.  Or worse, they will note that
> "latin1 seems to work" and use that even though it will work "less often"
> than mbcs.  I was simply hoping to automate that encoding using a scheme
> that works "most often".
> 
> The biggest drawback is that by doing nothing we are _encouraging_ the user
> to write broken code.  The way things stand at the moment, the users will
> _never_ pass Unicode objects to these APIs (as they dont work) and will
> therefore manually encode a string.  To my mind this is _worse_ than what my
> scheme proposes - at least my scheme allows Unicode objects to be passed to
> the Python functions - python may choose to change the way it handles these
> in the future.  But by forcing the user to encode a string we have lost
> _all_ meaningful information about the Unicode object and can only hope they
> got the encoding right.
> 
> If anyone else decides to take this on, please let me know.  However, I fear
> that in a couple of years we may still be waiting and in the meantime people
> will be coding hacks that will _not_ work in the new scheme.

Ehm, AFAIR, the Windows CRT APIs can take MBCS character input,
so why don't we go that route first and then later switch on
to full Unicode support ?

After all, I added the "es#" parser markers because you bugged me about
wanting to use them for Windows in the MBCS context -- you even
wrote up the MBCS codec... all this code has to be good for 
something ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From thomas@xs4all.net  Wed Mar 21 11:08:34 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 21 Mar 2001 12:08:34 +0100
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>; from tim.one@home.com on Tue, Mar 20, 2001 at 10:33:15PM -0500
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl> <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <20010321120833.Q29286@xs4all.nl>

On Tue, Mar 20, 2001 at 10:33:15PM -0500, Tim Peters wrote:
> Everyone!  Run this program under current CVS:

> x = 0.0
> print "%.17g" % -x
> print "%+.17g" % -x

> What do you get?  WinTel prints "0" for the first and "+0" for the second.

On BSDI (both 4.0 (gcc 2.7.2.1) and 4.1 (egcs 1.1.2 (2.91.66)) as well as
FreeBSD 4.2 (gcc 2.95.2):

>>> x = 0.0
>>> print "%.17g" % -x
0
>>> print "%+.17g" % -x
+0

Note that neither uses GNU libc even though they use gcc.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From jack@oratrix.nl  Wed Mar 21 11:31:07 2001
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 21 Mar 2001 12:31:07 +0100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: Message by "Mark Hammond" <MarkH@ActiveState.com> ,
 Mon, 19 Mar 2001 20:40:24 +1100 , <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com>
Message-ID: <20010321113107.A325B36B2C1@snelboot.oratrix.nl>

> The way I see it, to fix this we have 2 basic choices when a Unicode object
> is passed as a filename:
> * we call the Unicode versions of the CRTL.
> * we auto-encode using the "mbcs" encoding, and still call the non-Unicode
> versions of the CRTL.
> 
> The first option has a problem in that determining what Unicode support
> Windows 95/98 have may be more trouble than it is worth.  Sticking to purely
> ascii versions of the functions means that the worst thing that can happen
> is we get a regular file-system error if an mbcs encoded string is passed on
> a non-Unicode platform.
> 
> Does anyone have any objections to this scheme or see any drawbacks in it?
> If not, I'll knock up a patch...

The Mac has a very similar problem here: unless you go to the unicode APIs 
(which is pretty much impossible for stdio calls and such at the moment) you 
have to use the "current" 8-bit encoding for filenames.

Could you put your patch in such a shape that it could easily be adapted for 
other platforms? Something like PyOS_8BitFilenameFromUnicodeObject(PyObject *, 
char *, int) or so?
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From tismer@tismer.com  Wed Mar 21 12:52:05 2001
From: tismer@tismer.com (Christian Tismer)
Date: Wed, 21 Mar 2001 13:52:05 +0100
Subject: FW: FW: [Python-Dev] Simple generator implementation
References: <200103210423.VAA20300@localhost.localdomain>
Message-ID: <3AB8A3F5.D79F7AD8@tismer.com>


Uche Ogbuji wrote:
> 
> > [Uche Ogbuji]
> > > Quite interesting.  I brought up this *exact* point at the
> > > Stackless BOF at IPC9.  I mentioned that the immediate reason
> > > I was interested in Stackless was to supercharge the efficiency
> > > of 4XSLT.  I think that a stackless 4XSLT could pretty much
> > > annihilate the other processors in the field for performance.
> >
> > Hmm.  I'm interested in clarifying the cost/performance boundaries of the
> > various approaches.  I don't understand XSLT (I don't even know what it is).
> > Do you grok the difference between full-blown Stackless and Icon-style
> > generators?
> 
> To a decent extent, based on reading your posts carefully.
> 
> > The correspondent I quoted believed the latter were on-target
> > for XSLT work, and given the way Python works today generators are easier to
> > implement than full-blown Stackless.  But while I can speak with some
> > confidence about the latter, I don't know whether they're sufficient for what
> > you have in mind.
> 
> Based on a discussion with Christian at IPC9, they are.  I should have been
> more clear about that.  My main need is to be able to change a bit of context
> and invoke a different execution path, without going through the full overhead
> of a function call.  XSLT, if written "naturally", tends to involve huge
> numbers of such tweak-context-and-branch operations.
> 
> > If this is some flavor of one-at-time tree-traversal algorithm, generators
> > should suffice.
> >
> > class TreeNode:
> >     # with self.value
> >     #      self.children, a list of TreeNode objects
> >     ...
> >     def generate_kids(self):  # pre-order traversal
> >         suspend self.value
> >         for kid in self.children:
> >             for itskids in kid.generate_kids():
> >                 suspend itskids
> >
> > for k in someTreeNodeObject.generate_kids():
> >     print k
> >
> > So the control-flow is thoroughly natural, but you can only suspend to your
> > immediate invoker (in recursive traversals, this "walks up the chain" of
> > generators for each result).  With explicitly resumable generator objects,
> > multiple trees (or even general graphs -- doesn't much matter) can be
> > traversed in lockstep (or any other interleaving that's desired).
> >
> > Now decide <wink>.
> 
> Suspending only to the invoker should do the trick because it is typically a
> single XSLT instruction that governs multiple tree-operations with varied
> context.
> 
> At IPC9, Guido put up a poll of likely use of stackless features, and it was a
> pretty clear arithmetic progression from those who wanted to use microthreads,
> to those who wanted co-routines, to those who wanted just generators.  The
> generator folks were probably 2/3 of the assembly.  Looks as if many have
> decided, and they seem to agree with you.

Here are the exact facts of the poll:

     microthreads: 26
     co-routines:  35
     generators:   44

I think this reads a little differently.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From jack@oratrix.nl  Wed Mar 21 12:57:53 2001
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 21 Mar 2001 13:57:53 +0100
Subject: [Python-Dev] test_coercion failing
In-Reply-To: Message by "Tim Peters" <tim.one@home.com> ,
 Tue, 20 Mar 2001 22:33:15 -0500 , <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <20010321125753.9D98B36B2C1@snelboot.oratrix.nl>

> Everyone!  Run this program under current CVS:
> 
> x = 0.0
> print "%.17g" % -x
> print "%+.17g" % -x
> 
> What do you get?  WinTel prints "0" for the first and "+0" for the second.

Macintosh: -0 for both.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From thomas@xs4all.net  Wed Mar 21 13:07:04 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Wed, 21 Mar 2001 14:07:04 +0100
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.22504.605383.113425@mace.lucasdigital.com>; from tommy@ilm.com on Tue, Mar 20, 2001 at 11:37:00PM -0800
References: <15032.22504.605383.113425@mace.lucasdigital.com>
Message-ID: <20010321140704.R29286@xs4all.nl>

On Tue, Mar 20, 2001 at 11:37:00PM -0800, Flying Cougar Burnette wrote:

> ------------%< snip %<----------------------%< snip %<------------

> test_pty
> The actual stdout doesn't match the expected stdout.
> This much did match (between asterisk lines):
> **********************************************************************
> test_pty
> **********************************************************************
> Then ...
> We expected (repr): 'I'
> But instead we got: '\n'
> test test_pty failed -- Writing: '\n', expected: 'I'
> 
> 
> importing test_pty into an interactive interpreter gives this:
> 
> Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
> Type "copyright", "credits" or "license" for more information.
> >>> import test.test_pty
> Calling master_open()
> Got master_fd '4', slave_name '/dev/ttyq6'
> Calling slave_open('/dev/ttyq6')
> Got slave_fd '5'
> Writing to slave_fd
> 
> I wish to buy a fish license.For my pet fish, Eric.
> calling pty.fork()
> Waiting for child (16654) to finish.
> Child (16654) exited with status 1024.
> >>> 

Hmm. This is probably my test that is a bit gaga. It tries to test the pty
module, but since I can't find any guarantees on how pty's should work, it
probably relies on platform-specific accidents. It does the following:

---
TEST_STRING_1 = "I wish to buy a fish license."
TEST_STRING_2 = "For my pet fish, Eric."

[..]

debug("Writing to slave_fd")
os.write(slave_fd, TEST_STRING_1) # should check return value
print os.read(master_fd, 1024)

os.write(slave_fd, TEST_STRING_2[:5])
os.write(slave_fd, TEST_STRING_2[5:])
print os.read(master_fd, 1024)
---

Apparently, irix buffers the first write somewhere. Can you test if the
following works better:

---
TEST_STRING_1 = "I wish to buy a fish license.\n"
TEST_STRING_2 = "For my pet fish, Eric.\n"

[..]

debug("Writing to slave_fd")
os.write(slave_fd, TEST_STRING_1) # should check return value
sys.stdout.write(os.read(master_fd, 1024))

os.write(slave_fd, TEST_STRING_2[:5])
os.write(slave_fd, TEST_STRING_2[5:])
sys.stdout.write(os.read(master_fd, 1024))
---

(There should be no need to regenerate the output file, but if it still
fails on the same spot, try running it in verbose and see if you still have
the blank line after 'writing to slave_fd'.)

Note that the pty module is working fine, it's just the test that is screwed
up. Out of curiosity, is the test_openpty test working, or is it skipped ?

I see I also need to fix some other stuff in there, but I'll wait with that
until I hear that this works better :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From jack@oratrix.nl  Wed Mar 21 13:30:32 2001
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 21 Mar 2001 14:30:32 +0100
Subject: [Python-Dev] test_coercion failing
In-Reply-To: Message by Guido van Rossum <guido@digicool.com> ,
 Tue, 20 Mar 2001 16:16:32 -0500 , <200103202116.QAA01770@cj20424-a.reston1.va.home.com>
Message-ID: <20010321133032.9906836B2C1@snelboot.oratrix.nl>

It turns out that even simple things like 0j/2 return -0.0.

The culprit appears to be the statement
    r.imag = (a.imag - a.real*ratio) / denom;
in c_quot(), line 108.

The inner part is translated into a PPC multiply-subtract instruction
	fnmsub   fp0, fp1, fp31, fp0
Or, in other words, this computes "0.0 - (2.0 * 0.0)". The result of this is 
apparently -0.0. This sounds reasonable to me, or is this against IEEE754 
rules (or C99 rules?).

If this is all according to 754 rules the one puzzle remaining is why other 
754 platforms don't see the same thing. Could it be that the combined 
multiply-subtract skips a rounding step that separate multiply and subtract 
instructions would take? My floating point knowledge is pretty basic, so 
please enlighten me....
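[Editorial note: the branch of c_quot in question can be sketched in pure
Python (variable names follow the C source; this branch assumes |b.real| >=
|b.imag|).  For a = 0j, b = 2 the contested term is a.imag - a.real*ratio,
i.e. 0.0 - 0.0*0.0, which separate IEEE-754 multiply and subtract round to
+0.0 -- whereas the fused fnmsub apparently produced -0.0:]

```python
import math

def c_quot(a, b):
    # Simplified branch of CPython's complex division for |b.real| >= |b.imag|.
    ratio = b.imag / b.real
    denom = b.real + b.imag * ratio
    real = (a.real + a.imag * ratio) / denom
    imag = (a.imag - a.real * ratio) / denom  # the line 108 expression
    return complex(real, imag)

r = c_quot(0j, complex(2, 0))
# With separate multiply and subtract, the imaginary part comes out +0.0:
print(math.copysign(1.0, r.imag))  # 1.0
```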
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From guido@digicool.com  Wed Mar 21 14:36:49 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 21 Mar 2001 09:36:49 -0500
Subject: [Python-Dev] Editor sought for Quick Python Book 2nd ed.
Message-ID: <200103211436.JAA04108@cj20424-a.reston1.va.home.com>

The publisher of the Quick Python Book has approached me looking for
an editor for the second edition.  Anybody interested?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From uche.ogbuji@fourthought.com  Wed Mar 21 14:42:04 2001
From: uche.ogbuji@fourthought.com (Uche Ogbuji)
Date: Wed, 21 Mar 2001 07:42:04 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation
In-Reply-To: Message from Christian Tismer <tismer@tismer.com>
 of "Wed, 21 Mar 2001 13:52:05 +0100." <3AB8A3F5.D79F7AD8@tismer.com>
Message-ID: <200103211442.HAA21574@localhost.localdomain>

> > At IPC9, Guido put up a poll of likely use of stackless features, and it was a
> > pretty clear arithmetic progression from those who wanted to use microthreads,
> > to those who wanted co-routines, to those who wanted just generators.  The
> > generator folks were probably 2/3 of the assembly.  Looks as if many have
> > decided, and they seem to agree with you.
> 
> Here are the exact facts of the poll:
> 
>      microthreads: 26
>      co-routines:  35
>      generators:   44
> 
> I think this reads a little differently.

Either you're misreading me or I'm misreading you, because your facts seem to 
*exactly* corroborate what I said.  26 -> 35 -> 44 is pretty much an 
arithmetic progression, and it's exactly in the direction I mentioned 
(microthreads -> co-routines -> generators), so what difference do you see?

Of course my 2/3 number is a guess.  60 - 70 total people in the room matches
my memory.  Anyone else?


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji@fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python




From skip@pobox.com (Skip Montanaro)  Wed Mar 21 14:46:51 2001
From: skip@pobox.com (Skip Montanaro) (Skip Montanaro)
Date: Wed, 21 Mar 2001 08:46:51 -0600 (CST)
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>
 <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <15032.48859.744374.786895@beluga.mojam.com>

    Tim> Everyone!  Run this program under current CVS:
    Tim> x = 0.0
    Tim> print "%.17g" % -x
    Tim> print "%+.17g" % -x

    Tim> What do you get?

% ./python
Python 2.1b2 (#2, Mar 21 2001, 08:43:16) 
[GCC 2.95.3 19991030 (prerelease)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
-0
>>> print "%+.17g" % -x
-0

% ldd ./python
        libpthread.so.0 => /lib/libpthread.so.0 (0x4001a000)
        libdl.so.2 => /lib/libdl.so.2 (0x4002d000)
        libutil.so.1 => /lib/libutil.so.1 (0x40031000)
        libm.so.6 => /lib/libm.so.6 (0x40034000)
        libc.so.6 => /lib/libc.so.6 (0x40052000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

libc appears to actually be GNU libc 2.1.3.


From tismer@tismer.com  Wed Mar 21 14:52:14 2001
From: tismer@tismer.com (Christian Tismer)
Date: Wed, 21 Mar 2001 15:52:14 +0100
Subject: FW: FW: [Python-Dev] Simple generator implementation
References: <200103211442.HAA21574@localhost.localdomain>
Message-ID: <3AB8C01E.867B9C5C@tismer.com>


Uche Ogbuji wrote:
> 
> > > At IPC9, Guido put up a poll of likely use of stackless features, and it was a
> > > pretty clear arithmetic progression from those who wanted to use microthreads,
> > > to those who wanted co-routines, to those who wanted just generators.  The
> > > generator folks were probably 2/3 of the assembly.  Looks as if many have
> > > decided, and they seem to agree with you.
> >
> > Here are the exact facts of the poll:
> >
> >      microthreads: 26
> >      co-routines:  35
> >      generators:   44
> >
> > I think this reads a little differently.
> 
> Either you're misreading me or I'm misreading you, because your facts seem to
> *exactly* corroborate what I said.  26 -> 35 -> 44 is pretty much an
> arithmetic progression, and it's exactly in the direction I mentioned
> (microthreads -> co-routines -> generators), so what difference do you see?
> 
> Of course my 2/3 number is a guess.  60 - 70 total people in the room strikes
> my memory rightly.  Anyone else?

You are right, I was misunderstanding you. I thought 2/3rd of
all votes were in favor of generators, while my picture
is "most want generators, but the others are of comparable
interest".

sorry - ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer@tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/


From mwh21@cam.ac.uk  Wed Mar 21 15:39:40 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 15:39:40 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: "Tim Peters"'s message of "Tue, 20 Mar 2001 11:01:21 -0500"
References: <LNBBLJKPBEHFEDALKOLCEEONJGAA.tim.one@home.com>
Message-ID: <m3vgp3f7wj.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one@home.com> writes:

> [Michael Hudson]
> >>> Maybe you could do the check for resize *after* the call to
> >>> insertdict?  I think that would work, but I wouldn't like to go
> >>> messing with such a performance critical bit of code without some
> >>> careful thinking.
> 
> [Guido]
> >> No, that could still decide to resize, couldn't it?
> 
> [Michael]
> > Yes, but not when you're inserting on a key that is already in the
> > dictionary - because the resize would have happened when the key was
> > inserted into the dictionary, and thus the problem we're seeing here
> > wouldn't happen.
> 
> Careful:  this comment is only half the truth:
> 
> 	/* if fill >= 2/3 size, double in size */

Yes, that could be clearer.  I was confused by the distinction between
ma_used and ma_fill for a bit.

> The dictresize following is also how dicts *shrink*.  That is, build
> up a dict, delete a whole bunch of keys, and nothing at all happens
> to the size until you call setitem again (actually, I think you need
> to call it more than once -- the behavior is tricky).

Well, as I read it, if you delete a bunch of keys and then insert the
same keys again (as in pybench's SimpleDictManipulation), no resize
will happen because ma_fill will be unaffected.  A resize will only
happen if you fill up enough slots to get the 

    mp->ma_fill*3 >= mp->ma_size*2

to trigger.
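[Editorial note: a toy model of the ma_used / ma_fill distinction -- a sketch
of CPython's scheme, in which deletes leave "dummy" slots behind, so ma_fill
never decreases until an insert triggers a resize.  The class and counter
handling here are illustrative, not the real C code:]

```python
class DictStats:
    # Tracks only the counters, not actual slots: 'used' counts live keys,
    # 'fill' counts live keys plus dummy (deleted) slots.
    def __init__(self, size=8):
        self.size, self.used, self.fill = size, 0, 0

    def insert_new(self):
        self.used += 1
        self.fill += 1
        if self.fill * 3 >= self.size * 2:   # the trigger quoted above
            self.resize()

    def delete(self):
        self.used -= 1                       # note: fill is NOT decremented

    def resize(self):
        self.size *= 2                       # (real code sizes from 'used')
        self.fill = self.used                # dummies vanish on resize

d = DictStats()
for _ in range(5):
    d.insert_new()
for _ in range(5):
    d.delete()
# fill is still 5 even though the dict is empty; only a later
# insert-triggered resize clears the dummy slots.
print(d.used, d.fill)  # 0 5
```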

> In any case, that a key is already in the dict does not guarantee
> that a dict won't resize (via shrinking) when doing a setitem.

Yes.  But I still think that the patch I posted here (the one that
checks for resize after the call to insertdict in PyDict_SetItem)
yesterday will suffice; even if you've deleted a bunch of keys,
ma_fill will be unaffected by the deletes so the size check before the
insertdict won't be triggered (because it wasn't by the one after the
call to insertdict in the last call to setitem), and neither will the
size check after the call to insertdict (because you're inserting on a
key already in the dictionary and so ma_fill will be unchanged).  But
this is mighty fragile; something more
explicit is almost certainly a good idea.

So someone should either

> bite the bullet and add a new PyDict_AdjustSize function, just
> duplicating the resize logic.  

or just put a check in PyDict_Next, or outlaw this practice and fix
the places that do it.  And then document the conclusion.  And do it
before 2.1b2 on Friday.  I'll submit a patch, unless you're very
quick.

> Delicate, though.

Uhh, I'd say so.

Cheers,
M.

-- 
 Very clever implementation techniques are required to implement this
 insanity correctly and usefully, not to mention that code written
 with this feature used and abused east and west is exceptionally
 exciting to debug.       -- Erik Naggum on Algol-style "call-by-name"



From jeremy@alum.mit.edu  Wed Mar 21 15:51:28 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 10:51:28 -0500 (EST)
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com>
References: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>
 <LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com>
Message-ID: <15032.52736.537333.260718@w221.z064000254.bwi-md.dsl.cnc.net>

On the subject of keyword preferences, I like yield best because I
first saw iterators (Icon's generators) in CLU and CLU uses yield.

Jeremy


From jeremy@alum.mit.edu  Wed Mar 21 15:56:35 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 10:56:35 -0500 (EST)
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.22504.605383.113425@mace.lucasdigital.com>
References: <15032.22504.605383.113425@mace.lucasdigital.com>
Message-ID: <15032.53043.180771.612275@w221.z064000254.bwi-md.dsl.cnc.net>

The test_symtable crash is a shallow one.  There's a dependency
between a .h file and the extension module that isn't captured in the
setup.py.  I think you can delete _symtablemodule.o and rebuild -- or
do a make clean.  It should work then.

Jeremy


From tommy@ilm.com  Wed Mar 21 17:02:48 2001
From: tommy@ilm.com (Flying Cougar Burnette)
Date: Wed, 21 Mar 2001 09:02:48 -0800 (PST)
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.53043.180771.612275@w221.z064000254.bwi-md.dsl.cnc.net>
References: <15032.22504.605383.113425@mace.lucasdigital.com>
 <15032.53043.180771.612275@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <15032.57011.412823.462175@mace.lucasdigital.com>

That did it.  thanks!

Jeremy Hylton writes:
| The test_symtable crash is a shallow one.  There's a dependency
| between a .h file and the extension module that isn't captured in the
| setup.py.  I think you can delete _symtablemodule.o and rebuild -- or
| do a make clean.  It should work then.
| 
| Jeremy


From tommy@ilm.com  Wed Mar 21 17:08:49 2001
From: tommy@ilm.com (Flying Cougar Burnette)
Date: Wed, 21 Mar 2001 09:08:49 -0800 (PST)
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <20010321140704.R29286@xs4all.nl>
References: <15032.22504.605383.113425@mace.lucasdigital.com>
 <20010321140704.R29286@xs4all.nl>
Message-ID: <15032.57243.391141.409534@mace.lucasdigital.com>

Hey Thomas,

with these changes to test_pty.py I now get:

test_pty
The actual stdout doesn't match the expected stdout.
This much did match (between asterisk lines):
**********************************************************************
test_pty
**********************************************************************
Then ...
We expected (repr): 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'
But instead we got: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n'
test test_pty failed -- Writing: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n', expected: 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'

but when I import test.test_pty that blank line is gone.  Sounds like
the test verification just needs to be a bit more flexible, maybe?

test_openpty passes without a problem, BTW.
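One way to make the verification more flexible (a sketch of the idea, not the fix that actually went in) is to normalize the line endings the tty driver inserts before comparing:

```python
def normalize(data):
    # A pty's line discipline typically maps '\n' to '\r\n' on output
    # (the ONLCR flag); fold CRLF back to LF so the comparison is
    # ending-agnostic.
    return data.replace('\r\n', '\n')

expected = 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'
got = 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n'
assert normalize(got) == expected
```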



Thomas Wouters writes:
| On Tue, Mar 20, 2001 at 11:37:00PM -0800, Flying Cougar Burnette wrote:
| 
| > ------------%< snip %<----------------------%< snip %<------------
| 
| > test_pty
| > The actual stdout doesn't match the expected stdout.
| > This much did match (between asterisk lines):
| > **********************************************************************
| > test_pty
| > **********************************************************************
| > Then ...
| > We expected (repr): 'I'
| > But instead we got: '\n'
| > test test_pty failed -- Writing: '\n', expected: 'I'
| > 
| > 
| > importing test_pty into an interactive interpreter gives this:
| > 
| > Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
| > Type "copyright", "credits" or "license" for more information.
| > >>> import test.test_pty
| > Calling master_open()
| > Got master_fd '4', slave_name '/dev/ttyq6'
| > Calling slave_open('/dev/ttyq6')
| > Got slave_fd '5'
| > Writing to slave_fd
| > 
| > I wish to buy a fish license.For my pet fish, Eric.
| > calling pty.fork()
| > Waiting for child (16654) to finish.
| > Child (16654) exited with status 1024.
| > >>> 
| 
| Hmm. This is probably my test that is a bit gaga. It tries to test the pty
| module, but since I can't find any guarantees on how pty's should work, it
| probably relies on platform-specific accidents. It does the following:
| 
| ---
| TEST_STRING_1 = "I wish to buy a fish license."
| TEST_STRING_2 = "For my pet fish, Eric."
| 
| [..]
| 
| debug("Writing to slave_fd")
| os.write(slave_fd, TEST_STRING_1) # should check return value
| print os.read(master_fd, 1024)
| 
| os.write(slave_fd, TEST_STRING_2[:5])
| os.write(slave_fd, TEST_STRING_2[5:])
| print os.read(master_fd, 1024)
| ---
| 
| Apparently, irix buffers the first write somewhere. Can you test if the
| following works better:
| 
| ---
| TEST_STRING_1 = "I wish to buy a fish license.\n"
| TEST_STRING_2 = "For my pet fish, Eric.\n"
| 
| [..]
| 
| debug("Writing to slave_fd")
| os.write(slave_fd, TEST_STRING_1) # should check return value
| sys.stdout.write(os.read(master_fd, 1024))
| 
| os.write(slave_fd, TEST_STRING_2[:5])
| os.write(slave_fd, TEST_STRING_2[5:])
| sys.stdout.write(os.read(master_fd, 1024))
| ---
| 
| (There should be no need to regenerate the output file, but if it still
| fails on the same spot, try running it in verbose and see if you still have
| the blank line after 'writing to slave_fd'.)
| 
| Note that the pty module is working fine, it's just the test that is screwed
| up. Out of curiosity, is the test_openpty test working, or is it skipped ?
| 
| I see I also need to fix some other stuff in there, but I'll wait with that
| until I hear that this works better :)
| 
| -- 
| Thomas Wouters <thomas@xs4all.net>
| 
| Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From barry@digicool.com  Wed Mar 21 17:40:21 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Wed, 21 Mar 2001 12:40:21 -0500
Subject: [Python-Dev] PEP 1, PEP Purpose and Guidelines
Message-ID: <15032.59269.4520.961715@anthem.wooz.org>

With everyone feeling so PEPpy lately (yay!) I thought it was time to
do an updating pass through PEP 1.  Attached below is the latest copy,
also available (as soon as uploading is complete) via

    http://python.sourceforge.net/peps/pep-0001.html

Note the addition of the Replaces: and Replaced-By: headers for
formalizing the PEP replacement policy (thanks to Andrew Kuchling for
the idea and patch).

Enjoy,
-Barry

-------------------- snip snip --------------------
PEP: 1
Title: PEP Purpose and Guidelines
Version: $Revision: 1.16 $
Author: barry@digicool.com (Barry A. Warsaw),
    jeremy@digicool.com (Jeremy Hylton)
Status: Draft
Type: Informational
Created: 13-Jun-2000
Post-History: 21-Mar-2001


What is a PEP?

    PEP stands for Python Enhancement Proposal.  A PEP is a design
    document providing information to the Python community, or
    describing a new feature for Python.  The PEP should provide a
    concise technical specification of the feature and a rationale for
    the feature.

    We intend PEPs to be the primary mechanisms for proposing new
    features, for collecting community input on an issue, and for
    documenting the design decisions that have gone into Python.  The
    PEP author is responsible for building consensus within the
    community and documenting dissenting opinions.

    Because the PEPs are maintained as plain text files under CVS
    control, their revision history is the historical record of the
    feature proposal[1].
    

Kinds of PEPs

    There are two kinds of PEPs.  A standards track PEP describes a
    new feature or implementation for Python.  An informational PEP
    describes a Python design issue, or provides general guidelines or
    information to the Python community, but does not propose a new
    feature.


PEP Work Flow

    The PEP editor, Barry Warsaw <barry@digicool.com>, assigns numbers
    for each PEP and changes its status.

    The PEP process begins with a new idea for Python.  Each PEP must
    have a champion -- someone who writes the PEP using the style and
    format described below, shepherds the discussions in the
    appropriate forums, and attempts to build community consensus
    around the idea.  The PEP champion (a.k.a. Author) should first
    attempt to ascertain whether the idea is PEP-able.  Small
    enhancements or patches often don't need a PEP and can be injected
    into the Python development work flow with a patch submission to
    the SourceForge patch manager[2] or feature request tracker[3].

    The PEP champion then emails the PEP editor with a proposed title
    and a rough, but fleshed out, draft of the PEP.  This draft must
    be written in PEP style as described below.

    If the PEP editor approves, he will assign the PEP a number, label
    it as standards track or informational, give it status 'draft',
    and create and check-in the initial draft of the PEP.  The PEP
    editor will not unreasonably deny a PEP.  Reasons for denying PEP
    status include duplication of effort, being technically unsound,
    or not in keeping with the Python philosophy.  The BDFL
    (Benevolent Dictator for Life, Guido van Rossum
    <guido@python.org>) can be consulted during the approval phase,
    and is the final arbitrator of the draft's PEP-ability.

    The author of the PEP is then responsible for posting the PEP to
    the community forums, and marshaling community support for it.  As
    updates are necessary, the PEP author can check in new versions if
    they have CVS commit permissions, or can email new PEP versions to
    the PEP editor for committing.

    Standards track PEPs consist of two parts, a design document and
    a reference implementation.  The PEP should be reviewed and
    accepted before a reference implementation is begun, unless a
    reference implementation will aid people in studying the PEP.
    Standards Track PEPs must include an implementation -- in the form
    of code, a patch, or a URL to same -- before they can be
    considered Final.

    PEP authors are responsible for collecting community feedback on a
    PEP before submitting it for review.  A PEP that has not been
    discussed on python-list@python.org and/or python-dev@python.org
    will not be accepted.  However, wherever possible, long open-ended
    discussions on public mailing lists should be avoided.  A better
    strategy is to encourage public feedback directly to the PEP
    author, who collects and integrates the comments back into the
    PEP.

    Once the authors have completed a PEP, they must inform the PEP
    editor that it is ready for review.  PEPs are reviewed by the BDFL
    and his chosen consultants, who may accept or reject a PEP or send
    it back to the author(s) for revision.

    Once a PEP has been accepted, the reference implementation must be
    completed.  When the reference implementation is complete and
    accepted by the BDFL, the status will be changed to `Final.'

    A PEP can also be assigned status `Deferred.'  The PEP author or
    editor can assign the PEP this status when no progress is being
    made on the PEP.  Once a PEP is deferred, the PEP editor can
    re-assign it to draft status.

    A PEP can also be `Rejected'.  Perhaps after all is said and done
    it was not a good idea.  It is still important to have a record of
    this fact.

    PEPs can also be replaced by a different PEP, rendering the
    original obsolete.  This is intended for Informational PEPs, where
    version 2 of an API can replace version 1.

    PEP work flow is as follows:

        Draft -> Accepted -> Final -> Replaced
          ^
          +----> Rejected
          v
        Deferred

    Some informational PEPs may also have a status of `Active' if they
    are never meant to be completed.  E.g. PEP 1.
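The transition diagram above can be read as a small state table; sketched in Python purely as an illustration of the allowed moves:

```python
# Allowed status transitions from the diagram above (illustration only).
TRANSITIONS = {
    'Draft':    {'Accepted', 'Rejected', 'Deferred'},
    'Deferred': {'Draft'},
    'Accepted': {'Final'},
    'Final':    {'Replaced'},
}

def can_move(old, new):
    return new in TRANSITIONS.get(old, set())

assert can_move('Draft', 'Deferred')
assert can_move('Accepted', 'Final')
assert not can_move('Rejected', 'Draft')   # Rejected is terminal
```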


What belongs in a successful PEP?

    Each PEP should have the following parts:

    1. Preamble -- RFC822 style headers containing meta-data about the
       PEP, including the PEP number, a short descriptive title, the
       names and contact info for each author, etc.

    2. Abstract -- a short (~200 word) description of the technical
       issue being addressed.

    3. Copyright/public domain -- Each PEP must either be explicitly
       placed in the public domain or licensed under the Open
       Publication License[4].

    4. Specification -- The technical specification should describe
       the syntax and semantics of any new language feature.  The
       specification should be detailed enough to allow competing,
       interoperable implementations for any of the current Python
       platforms (CPython, JPython, Python .NET).

    5. Rationale -- The rationale fleshes out the specification by
       describing what motivated the design and why particular design
       decisions were made.  It should describe alternate designs that
       were considered and related work, e.g. how the feature is
       supported in other languages.

       The rationale should provide evidence of consensus within the
       community and discuss important objections or concerns raised
       during discussion.

    6. Reference Implementation -- The reference implementation must
       be completed before any PEP is given status 'Final,' but it
       need not be completed before the PEP is accepted.  It is better
       to finish the specification and rationale first and reach
       consensus on it before writing code.

       The final implementation must include test code and
       documentation appropriate for either the Python language
       reference or the standard library reference.


PEP Style

    PEPs are written in plain ASCII text, and should adhere to a
    rigid style.  There is a Python script that parses this style and
    converts the plain text PEP to HTML for viewing on the web[5].

    Each PEP must begin with an RFC822 style header preamble.  The
    headers must appear in the following order.  Headers marked with
    `*' are optional and are described below.  All other headers are
    required.

        PEP: <pep number>
        Title: <pep title>
        Version: <cvs version string>
        Author: <list of authors' email and real name>
      * Discussions-To: <email address>
        Status: <Draft | Active | Accepted | Deferred | Final | Replaced>
        Type: <Informational | Standards Track>
        Created: <date created on, in dd-mmm-yyyy format>
      * Python-Version: <version number>
        Post-History: <dates of postings to python-list and python-dev>
      * Replaces: <pep number>
      * Replaced-By: <pep number>
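    A hypothetical preamble following that ordering (every value here
    is invented for illustration) might look like:

```
PEP: 9999
Title: An Example Proposal
Version: $Revision: 1.1 $
Author: jane@example.com (Jane Doe)
Status: Draft
Type: Standards Track
Created: 21-Mar-2001
Python-Version: 2.1
Post-History: 21-Mar-2001
```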

    Standards track PEPs must have a Python-Version: header which
    indicates the version of Python that the feature will be released
    with.  Informational PEPs do not need a Python-Version: header.

    While a PEP is in private discussions (usually during the initial
    Draft phase), a Discussions-To: header will indicate the mailing
    list or URL where the PEP is being discussed.  No Discussions-To:
    header is necessary if the PEP is being discussed privately with
    the author, or on the python-list or python-dev email mailing
    lists.

    PEPs may also have a Replaced-By: header indicating that a PEP has
    been rendered obsolete by a later document; the value is the
    number of the PEP that replaces the current document.  The newer
    PEP must have a Replaces: header containing the number of the PEP
    that it rendered obsolete.

    PEP headings must begin in column zero and the initial letter of
    each word must be capitalized as in book titles.  Acronyms should
    be in all capitals.  The body of each section must be indented 4
    spaces.  Code samples inside body sections should be indented a
    further 4 spaces, and other indentation can be used as required to
    make the text readable.  You must use two blank lines between the
    last line of a section's body and the next section heading.

    Tab characters must never appear in the document at all.  A PEP
    should include the Emacs stanza included by example in this PEP.

    A PEP must contain a Copyright section, and it is strongly
    recommended to put the PEP in the public domain.

    You should footnote any URLs in the body of the PEP, and a PEP
    should include a References section with those URLs expanded.


References and Footnotes

    [1] This historical record is available by the normal CVS commands
    for retrieving older revisions.  For those without direct access
    to the CVS tree, you can browse the current and past PEP revisions
    via the SourceForge web site at

    http://cvs.sourceforge.net/cgi-bin/cvsweb.cgi/python/nondist/peps/?cvsroot=python

    [2] http://sourceforge.net/tracker/?group_id=5470&atid=305470

    [3] http://sourceforge.net/tracker/?atid=355470&group_id=5470&func=browse

    [4] http://www.opencontent.org/openpub/

    [5] The script referred to here is pep2html.py, which lives in
    the same directory in the CVS tree as the PEPs themselves.  Try
    "pep2html.py --help" for details.

    The URL for viewing PEPs on the web is
    http://python.sourceforge.net/peps/


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:


From m.favas@per.dem.csiro.au  Wed Mar 21 19:44:30 2001
From: m.favas@per.dem.csiro.au (Mark Favas)
Date: Thu, 22 Mar 2001 03:44:30 +0800
Subject: [Python-Dev] test_coercion failing
Message-ID: <3AB9049E.7331F570@per.dem.csiro.au>

[Tim searches for -0's]
On Tru64 Unix (4.0F) with Compaq's C compiler I get:
Python 2.1b2 (#344, Mar 22 2001, 03:18:25) [C] on osf1V4
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
-0
>>> print "%+.17g" % -x
-0

and on Solaris 8 (Sparc) with gcc I get:
Python 2.1b2 (#23, Mar 22 2001, 03:25:27) 
[GCC 2.95.2 19991024 (release)] on sunos5
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
-0
>>> print "%+.17g" % -x
-0

while on FreeBSD 4.2 with gcc I get:
Python 2.1b2 (#3, Mar 22 2001, 03:36:19) 
[GCC 2.95.2 19991024 (release)] on freebsd4
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
0
>>> print "%+.17g" % -x
+0
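In modern Python the sign of a zero can be inspected portably with math.copysign (added long after 2.1), sidestepping the printf %g differences shown above:

```python
import math

neg_zero = -0.0
# The platforms above disagree in how printf's %g renders -0; copysign
# reads the sign bit unambiguously.
assert math.copysign(1.0, neg_zero) == -1.0
assert math.copysign(1.0, 0.0) == 1.0
assert neg_zero == 0.0   # the two zeros still compare equal
```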

-- 
Mark Favas  -   m.favas@per.dem.csiro.au
CSIRO, Private Bag No 5, Wembley, Western Australia 6913, AUSTRALIA


From tim.one@home.com  Wed Mar 21 20:18:54 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 21 Mar 2001 15:18:54 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <20010321133032.9906836B2C1@snelboot.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEOJHAA.tim.one@home.com>

[Jack Jansen]
> It turns out that even simple things like 0j/2 return -0.0.
>
> The culprit appears to be the statement
>     r.imag = (a.imag - a.real*ratio) / denom;
> in c_quot(), line 108.
>
> The inner part is translated into a PPC multiply-subtract instruction
> 	fnmsub   fp0, fp1, fp31, fp0
> Or, in other words, this computes "0.0 - (2.0 * 0.0)". The result
> of this is apparently -0.0. This sounds reasonable to me, or is
> this against IEEE754 rules (or C99 rules?).

I've said it twice, but I'll say it once more <wink>:  under 754 rules,

   (+0) - (+0)

must return +0 in all rounding modes except for (the exceedingly unlikely, as
it's not the default) to-minus-infinity rounding mode.  The latter case is
the only case in which it should return -0.  Under the default
to-nearest/even rounding mode, and under the to-plus-infinity and to-0
rounding modes, +0 is the required result.

However, we don't know whether a.imag is +0 or -0 on your box; it *should* be
+0.  If it were -0, then

   (-0) - (+0)

should indeed be -0 under default 754 rules.  So this still needs to be
traced back.  That is, when you say it computes "0.0 - (2.0 * 0.0)", there
are four *possible* things that could mean, depending on the signs of the
zeroes.  As is, I'm afraid we still don't know enough to say whether the -0
result is due to an unexpected -0 as one of the inputs.
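These 754 rules can be checked directly on any IEEE-conforming box (math.copysign, used here only to read the sign bit, postdates 2.1):

```python
import math

def sign(x):
    return math.copysign(1.0, x)

# Under default round-to-nearest/even, (+0) - (+0) must be +0 ...
assert sign(0.0 - 0.0) == 1.0
# ... while (-0) - (+0) is -0, and a product like 2.0 * (-0.0) keeps
# the sign of its zero operand.
assert sign(-0.0 - 0.0) == -1.0
assert sign(2.0 * -0.0) == -1.0
# So the full expression with all-positive zeros yields +0.
assert sign(0.0 - (2.0 * 0.0)) == 1.0
```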

> If this is all according to 754 rules the one puzzle remaining is
> why other 754 platforms don't see the same thing.

Because the antecedent is wrong:  the behavior you're seeing violates 754
rules (unless you've somehow managed to ask for to-minus-infinity rounding,
or you're getting -0 inputs for bogus reasons).

Try this:

    print repr(1.0 - 1e-100)

If that doesn't display "1.0", but something starting "0.9999"..., then
you've somehow managed to get to-minus-infinity rounding.

Another thing to try:

    print 2+0j

Does that also come out as "2-0j" for you?

What about:

    print repr((0j).real), repr((0j).imag)

?  (I'm trying to see whether -0 parts somehow get invented out of thin air.)
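The probes suggested above, collected into one script (modern Python syntax):

```python
# Probe 1: detects to-minus-infinity rounding.
print(repr(1.0 - 1e-100))            # '1.0' under default rounding
# Probe 2: does the imaginary zero pick up a spurious sign?
print(2 + 0j)
# Probe 3: inspect the parts of 0j directly.
print(repr((0j).real), repr((0j).imag))
```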

> Could it be that the combined multiply-subtract skips a rounding
> step that separate multiply and subtract instructions would take? My
> floating point knowledge is pretty basic, so please enlighten me....

I doubt this has anything to do with the fused mul-sub.  That operation isn't
defined as such by 754, but it would be a mondo serious hardware bug if it
didn't operate on endcase values the same way as separate mul-then-sub.
OTOH, the new complex division algorithm may generate a fused mul-sub in
places where the old algorithm did not, so I can't rule that out either.

BTW, most compilers for boxes with fused mul-add have a switch to disable
generating the fused instructions.  Might want to give that a try (if you
have such a switch, it may mask the symptom but leave the cause unknown).



From tim.one@home.com  Wed Mar 21 20:45:09 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 21 Mar 2001 15:45:09 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
Message-ID: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>

When running the full test suite, test_doctest fails (in current CVS; did not
fail yesterday).  This was on Windows.  Other platforms?

Does not fail in isolation.  Doesn't matter whether or not .pyc files are
deleted first, and doesn't matter whether a regular or debug build of Python
is used.

In four runs of the full suite with regrtest -r (randomize test order),
test_doctest failed twice and passed twice.  So it's unlikely this has
something specifically to do with doctest.

roll-out-the-efence?-ly y'rs  - tim



From jeremy@alum.mit.edu  Wed Mar 21 20:41:53 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 15:41:53 -0500 (EST)
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
Message-ID: <15033.4625.822632.276247@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "TP" == Tim Peters <tim.one@home.com> writes:

  TP> In four runs of the full suite with regrtest -r (randomize test
  TP> order), test_doctest failed twice and passed twice.  So it's
  TP> unlikely this has something specifically to do with doctest.

How does doctest fail?  Does that give any indication of the nature of
the problem?  Does it fail with a core dump (or whatever Windows does
instead)?  Or is the output wrong?

Jeremy


From guido@digicool.com  Wed Mar 21 21:01:12 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 21 Mar 2001 16:01:12 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: Your message of "Wed, 21 Mar 2001 15:45:09 EST."
 <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
Message-ID: <200103212101.QAA11781@cj20424-a.reston1.va.home.com>

> When running the full test suite, test_doctest fails (in current CVS; did not
> fail yesterday).  This was on Windows.  Other platforms?
> 
> Does not fail in isolation.  Doesn't matter whether or not .pyc files are
> deleted first, and doesn't matter whether a regular or debug build of Python
> is used.
> 
> In four runs of the full suite with regrtest -r (randomize test order),
> test_doctest failed twice and passed twice.  So it's unlikely this has
> something specifically to do with doctest.

Last time we had something like this it was a specific dependency
between two test modules, where if test_A was imported before test_B,
things were fine, but in the other order one of them would fail.

I noticed that someone (Jeremy?) checked in a whole slew of changes to
test modules, including test_support.  I also noticed that stuff was
added to test_support that would show up if you did "from test_support
import *".  I believe previously this was intended to only export a
small number of things; now it exports more, e.g. unittest, os, and
sys.  But that doesn't look like it would make much of a difference.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From mwh21@cam.ac.uk  Wed Mar 21 21:03:40 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 21:03:40 +0000
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
Message-ID: <m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one@home.com> writes:

> When running the full test suite, test_doctest fails (in current CVS; did not
> fail yesterday).  This was on Windows.  Other platforms?

Yes.  Linux.

I'm getting:

We expected (repr): 'doctest.Tester.runstring.__doc__'
But instead we got: 'doctest.Tester.summarize.__doc__'

> Does not fail in isolation.  

Indeed.

How does doctest order its tests?  I bet the changes just made to
dictobject.c make the order of dict.items() slightly unpredictable
(groan).

Cheers,
M.

-- 
81. In computing, turning the obvious into the useful is a living
    definition of the word "frustration".
  -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From jeremy@alum.mit.edu  Wed Mar 21 20:54:05 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 15:54:05 -0500 (EST)
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
 <m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <15033.5357.471974.18878@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "MWH" == Michael Hudson <mwh21@cam.ac.uk> writes:

  MWH> "Tim Peters" <tim.one@home.com> writes:
  >> When running the full test suite, test_doctest fails (in current
  >> CVS; did not fail yesterday).  This was on Windows.  Other
  >> platforms?

  MWH> Yes.  Linux.

Interesting.  I've done four runs (-r) and not seen any errors on my
Linux box.  Maybe I'm just unlucky.

Jeremy


From tim.one@home.com  Wed Mar 21 21:13:14 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 21 Mar 2001 16:13:14 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <15033.4625.822632.276247@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFEJHAA.tim.one@home.com>

[Jeremy]
> How does doctest fail?  Does that give any indication of the nature of
> the problem?  Does it fail with a core dump (or whatever Windows does
> instead)?  Or is the output wrong?

Sorry, I should know better than to say "doesn't work".  It's that the output
is wrong:

It's good up through the end of this section of output:

...
1 items had failures:
   1 of   2 in XYZ
4 tests in 2 items.
3 passed and 1 failed.
***Test Failed*** 1 failures.
(1, 4)
ok
0 of 6 examples failed in doctest.Tester.__doc__
Running doctest.Tester.__init__.__doc__
0 of 0 examples failed in doctest.Tester.__init__.__doc__
Running doctest.Tester.run__test__.__doc__
0 of 0 examples failed in doctest.Tester.run__test__.__doc__
Running


But then:

We expected (repr): 'doctest.Tester.runstring.__doc__'
But instead we got: 'doctest.Tester.summarize.__doc__'


Hmm!  Perhaps doctest is merely running sub-tests in a different order.
doctest uses whatever order dict.items() returns (for the module __dict__ and
class __dict__s, etc).  It should probably force the order.  I'm going to get
something to eat and ponder that ... if true, The Mystery is how the internal
dicts could get *built* in a different order across runs ...
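Forcing the order, as suggested, could be as simple as sorting the names before iterating (a sketch of the idea, not doctest's actual fix; the dict contents here are illustrative):

```python
tester_dict = {'runstring': None, 'summarize': None,
               'merge': None, 'rundoc': None}

# Iterating over sorted names yields the same sub-test order no matter
# how the dict's internal table happens to be laid out.
order = sorted(tester_dict)
assert order == ['merge', 'rundoc', 'runstring', 'summarize']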

BTW, does or doesn't a run of the full test suite complain here too under
your Linux box?



From tim.one@home.com  Wed Mar 21 21:17:39 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 21 Mar 2001 16:17:39 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEFFJHAA.tim.one@home.com>

[Michael Hudson]
> Yes.  Linux.
>
> I'm getting:
>
> We expected (repr): 'doctest.Tester.runstring.__doc__'
> But instead we got: 'doctest.Tester.summarize.__doc__'

Same thing, then (Jeremy, *don't* use -r).

>> Does not fail in isolation.

> Indeed.

> How does doctest order its tests?  I bet the changes just made to
> dictobject.c make the order of dict.items() slightly unpredictable
> (groan).

As just posted, doctest uses whatever .items() returns but probably
shouldn't.  It's hard to see how the dictobject.c changes could affect that,
but I have to agree they're the most likely suspect.  I'll back those out
locally and see whether the problem persists.

But I'm going to eat first!



From michel@digicool.com  Wed Mar 21 21:44:29 2001
From: michel@digicool.com (Michel Pelletier)
Date: Wed, 21 Mar 2001 13:44:29 -0800 (PST)
Subject: [Python-Dev] PEP 245: Python Interfaces
Message-ID: <Pine.LNX.4.32.0103211340050.25303-100000@localhost.localdomain>

Barry has just checked in PEP 245 for me.

http://python.sourceforge.net/peps/pep-0245.html

I'd like to open up the discussion phase on this PEP to anyone who is
interested in commenting on it.  I'm not sure of the proper forum, it has
been discussed to some degree on the types-sig.

Thanks,

-Michel



From mwh21@cam.ac.uk  Wed Mar 21 22:01:15 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 22:01:15 +0000
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
References: <LNBBLJKPBEHFEDALKOLCOEFFJHAA.tim.one@home.com>
Message-ID: <m3elvqg4t0.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one@home.com> writes:

> [Michael Hudson]
> > Yes.  Linux.
> >
> > I'm getting:
> >
> > We expected (repr): 'doctest.Tester.runstring.__doc__'
> > But instead we got: 'doctest.Tester.summarize.__doc__'
> 
> Same thing, then (Jeremy, *don't* use -r).
> 
> >> Does not fail in isolation.
> 
> > Indeed.
> 
> > How does doctest order it's tests?  I bet the changes just made to
> > dictobject.c make the order of dict.items() slightly unpredictable
> > (groan).
> 
> As just posted, doctest uses whatever .items() returns but probably
> shouldn't.  It's hard to see how the dictobject.c changes could
> affect that, but I have to agree they're the most likley suspect.

> I'll back those out locally and see whether the problem persists.

Fixes things here.

Oooh, look at this:

$ ../../python 
Python 2.1b2 (#3, Mar 21 2001, 21:29:14) 
[GCC 2.95.1 19990816/Linux (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import doctest
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', '_Tester__record_outcome', 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge', 'rundoc', '__module__']
>>> doctest.testmod(doctest)
(0, 53)
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', 'summarize', '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc', '_Tester__record_outcome', '__module__']

Indeed:

$ ../../python 
Python 2.1b2 (#3, Mar 21 2001, 21:29:14) 
[GCC 2.95.1 19990816/Linux (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import doctest
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', '_Tester__record_outcome', 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge', 'rundoc', '__module__']
>>> doctest.Tester.__dict__['__doc__'] = doctest.Tester.__dict__['__doc__']
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', 'summarize', '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc', '_Tester__record_outcome', '__module__']

BUT, and this is where I give up:

    This has always happened!  It even happens with Python 1.5.2!

it just makes a difference now.  So maybe it's something else entirely.

Cheers,
M.

-- 
  MARVIN:  Do you want me to sit in a corner and rust, or just fall
           apart where I'm standing?
                    -- The Hitch-Hikers Guide to the Galaxy, Episode 2



From tim.one@home.com  Wed Mar 21 22:30:52 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 21 Mar 2001 17:30:52 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <m3elvqg4t0.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com>

[Michael Hudson]
> Oooh, look at this:
>
> $ ../../python
> Python 2.1b2 (#3, Mar 21 2001, 21:29:14)
> [GCC 2.95.1 19990816/Linux (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> import doctest
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', '_Tester__record_outcome',
> 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge',
> 'rundoc', '__module__']
> >>> doctest.testmod(doctest)
> (0, 53)
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', 'summarize',
> '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc',
> '_Tester__record_outcome', '__module__']

Cute!  Hard to explain, unless someone is using PyDict_Next on this dict:
since the dict has 11 items, it's exactly at the boundary where PyDict_Next
will now resize it.

> Indeed:
>
> $ ../../python
> Python 2.1b2 (#3, Mar 21 2001, 21:29:14)
> [GCC 2.95.1 19990816/Linux (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> import doctest
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', '_Tester__record_outcome',
> 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge',
> 'rundoc', '__module__']
> >>> doctest.Tester.__dict__['__doc__'] = doctest.Tester.__dict__['__doc__']
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', 'summarize',
> '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc',
> '_Tester__record_outcome', '__module__']
>
> BUT, and this is where I give up:
>
>     This has always happened!  It even happens with Python 1.5.2!

Yes, but in this case you did an explicit setitem, and PyDict_SetItem *will*
resize it (because it started with 11 entries:  11*3 >= 16*2, but 10*3 <
16*2).  Nothing has changed there in many years.
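The threshold arithmetic Tim cites can be sanity-checked directly. This is just a sketch of the 2.1-era resize rule he describes, not the actual C code in dictobject.c:

```python
# 2.1-era dict resize rule (sketch): with a hash table of size 16,
# an insert triggers a resize once fill * 3 >= size * 2.
def triggers_resize(fill, size=16):
    return fill * 3 >= size * 2

print(triggers_resize(11))  # True:  33 >= 32, so the 11-entry dict resizes
print(triggers_resize(10))  # False: 30 <  32, no resize yet
```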

> it just makes a difference now.  So maybe it's something else entirely.

Well, nobody should rely on the order of dict.items().  Curiously, doctest
actually doesn't, but the order of its verbose-mode *output* blocks changes,
and it's the regrtest.py framework that cares about that.

I'm calling this one a bug in doctest.py, and will fix it there.  Ugly:
since we can no longer rely on list.sort() not raising exceptions, it
won't be enough to replace the existing

    for k, v in dict.items():

with

    items = dict.items()
    items.sort()
    for k, v in items:

I guess

    keys = dict.keys()
    keys.sort()
    for k in keys:
        v = dict[k]

is the easiest safe alternative (these are namespace dicts, btw, so it's
certain the keys are all strings).
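A runnable sketch of that safe alternative, wrapped in a helper for illustration (sorted_items is a hypothetical name, not doctest's actual code):

```python
# Iterate a namespace dict in sorted-key order so the iteration
# order is stable regardless of the dict's internal layout.
def sorted_items(d):
    keys = list(d.keys())
    keys.sort()
    return [(k, d[k]) for k in keys]

ns = {'runstring': 1, 'summarize': 2, 'merge': 3}
print(sorted_items(ns))  # [('merge', 3), ('runstring', 1), ('summarize', 2)]
```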

thanks-for-the-help!-ly y'rs  - tim



From guido@digicool.com  Wed Mar 21 22:36:13 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 21 Mar 2001 17:36:13 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: Your message of "Wed, 21 Mar 2001 17:30:52 EST."
 <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com>
Message-ID: <200103212236.RAA12977@cj20424-a.reston1.va.home.com>

> Cute!  Hard to explain, unless someone is using PyDict_Next on this dict:
> since the dict has 11 items, it's exactly at the boundary where PyDict_Next
> will now resize it.

It *could* be the garbage collector.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From mwh21@cam.ac.uk  Wed Mar 21 23:24:33 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 23:24:33 +0000
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: Guido van Rossum's message of "Wed, 21 Mar 2001 17:36:13 -0500"
References: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com> <200103212236.RAA12977@cj20424-a.reston1.va.home.com>
Message-ID: <m3ae6eg0y6.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido@digicool.com> writes:

> > Cute!  Hard to explain, unless someone is using PyDict_Next on this dict:
> > since the dict has 11 items, it's exactly at the boundary where PyDict_Next
> > will now resize it.
> 
> It *could* be the garbage collector.

I think it would have to be; there just aren't that many calls to
PyDict_Next around.  I confused myself by thinking that calling keys()
called PyDict_Next, but it doesn't.

glad-that-one's-sorted-out-ly y'rs
M.

-- 
  "The future" has arrived but they forgot to update the docs.
                                        -- R. David Murray, 9 May 2000



From greg@cosc.canterbury.ac.nz  Thu Mar 22 01:37:00 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 22 Mar 2001 13:37:00 +1200 (NZST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <3AB87C4E.450723C2@lemburg.com>
Message-ID: <200103220137.NAA06583@s454.cosc.canterbury.ac.nz>

"M.-A. Lemburg" <mal@lemburg.com>:

> XXX The functions here don't copy the resource fork or other metadata on Mac.

Wouldn't it be better to fix these functions on the Mac
instead of depriving everyone else of them?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From greg@cosc.canterbury.ac.nz  Thu Mar 22 01:39:05 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 22 Mar 2001 13:39:05 +1200 (NZST)
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <012601c0b1d8$7dc3cc50$e46940d5@hagrid>
Message-ID: <200103220139.NAA06587@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <fredrik@effbot.org>:

> I associate "yield" with non-preemptive threading (yield
> to anyone else, not necessarily my caller).

Well, this flavour of generators is sort of a special case
subset of non-preemptive threading, so the usage is not
entirely inconsistent.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tim.one@home.com  Thu Mar 22 01:41:02 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 21 Mar 2001 20:41:02 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <15032.22433.953503.130175@mace.lucasdigital.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEGKJHAA.tim.one@home.com>

[Flying Cougar Burnette]
> I get the same ("0" then "+0") on my irix65 O2.  test_coerce succeeds
> as well.

Tommy, it's great to hear that Irix screws up signed-zero output too!  The
two computer companies I own stock in are SGI and Microsoft.  I'm sure this
isn't a coincidence <wink>.

i'll-use-linux-when-it-gets-rid-of-those-damn-sign-bits-ly y'rs  - tim



From represearch@yahoo.com  Wed Mar 21 19:46:00 2001
From: represearch@yahoo.com (reptile research)
Date: Wed, 21 Mar 2001 19:46:00
Subject: [Python-Dev] (no subject)
Message-ID: <E14fu8l-0000lc-00@mail.python.org>


From nhodgson@bigpond.net.au  Thu Mar 22 02:07:28 2001
From: nhodgson@bigpond.net.au (Neil Hodgson)
Date: Thu, 22 Mar 2001 13:07:28 +1100
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
References: <200103220137.NAA06583@s454.cosc.canterbury.ac.nz>
Message-ID: <034601c0b274$d8bab8c0$8119fea9@neil>

Greg Ewing:
> "M.-A. Lemburg" <mal@lemburg.com>:
>
> > XXX The functions here don't copy the resource fork or other metadata
> > on Mac.
>
> Wouldn't it be better to fix these functions on the Mac
> instead of depriving everyone else of them?

   Then they should be fixed for Windows as well where they don't copy
secondary forks either. While not used much by native code, forks are
commonly used on NT servers which serve files to Macintoshes.

   There is also the issue of other metadata. Should shutil optionally copy
ownership information? Access Control Lists? Summary information? A really
well designed module here could be very useful but quite some work.

   Neil



From nhodgson@bigpond.net.au  Thu Mar 22 02:14:22 2001
From: nhodgson@bigpond.net.au (Neil Hodgson)
Date: Thu, 22 Mar 2001 13:14:22 +1100
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
References: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz><LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com> <15032.52736.537333.260718@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <035801c0b275$cf667510$8119fea9@neil>

Jeremy Hylton:

> On the subject of keyword preferences, I like yield best because I
> first saw iterators (Icon's generators) in CLU and CLU uses yield.

   For me the benefit of "yield" is that it connotes both transfer of value
and transfer of control, just like "return", while "suspend" only connotes
transfer of control.

   "This tree yields 20 Kilos of fruit each year" and "When merging, yield
to the vehicles to your right".

   Neil



From barry@digicool.com  Thu Mar 22 03:16:30 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Wed, 21 Mar 2001 22:16:30 -0500
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
References: <3AB87C4E.450723C2@lemburg.com>
 <200103220137.NAA06583@s454.cosc.canterbury.ac.nz>
Message-ID: <15033.28302.876972.730118@anthem.wooz.org>

>>>>> "GE" == Greg Ewing <greg@cosc.canterbury.ac.nz> writes:

    GE> Wouldn't it be better to fix these functions on the Mac
    GE> instead of depriving everyone else of them?

Either way, shutil sure is useful!


From MarkH@ActiveState.com  Thu Mar 22 05:16:09 2001
From: MarkH@ActiveState.com (Mark Hammond)
Date: Thu, 22 Mar 2001 16:16:09 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com>
Message-ID: <LCEPIIGDJPKCOIHOBJEPOEKKDGAA.MarkH@ActiveState.com>

I have submitted patch #410465 for this.

http://sourceforge.net/tracker/?func=detail&aid=410465&group_id=5470&atid=305470

Comments are in the patch, so I won't repeat them here, but I would
appreciate a few reviews on the code.  Particularly, my addition of a new
format to PyArg_ParseTuple and the resulting extra string copy may raise a
few eye-brows.

I've even managed to include the new test file and its output in the patch,
so it will hopefully apply cleanly and run a full test if you want to try
it.

Thanks,

Mark.



From nas@arctrix.com  Thu Mar 22 05:44:32 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Wed, 21 Mar 2001 21:44:32 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCENJJGAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Tue, Mar 20, 2001 at 01:31:49AM -0500
References: <20010319084534.A18938@glacier.fnational.com> <LNBBLJKPBEHFEDALKOLCCENJJGAA.tim_one@email.msn.com>
Message-ID: <20010321214432.A25810@glacier.fnational.com>

[Tim on comparing fringes of two trees]:
> In Icon you need to create co-expressions to solve this
> problem, because its generators aren't explicitly resumable,
> and Icon has no way to spell "kick a pair of generators in
> lockstep".  But explicitly resumable generators are in fact
> "good enough" for this classic example, which is usually used
> to motivate coroutines.

Apparently they are good for lots of other things too.  Tonight I
implemented passing values using resume().  Next, I decided to
see if I had enough magic juice to tackle the coroutine example
from Gordon's stackless tutorial.  It turns out that I didn't
need the extra functionality.  Generators are enough.

The code is not too long so I've attached it.  I figure that some
people might need a break from 2.1 release issues.  I think the
generator version is even simpler than the coroutine version.

  Neil

# Generator example:
# The program is a variation of a Simula 67 program due to Dahl & Hoare,
# who in turn credit the original example to Conway.
#
# We have a number of input lines, terminated by a 0 byte.  The problem
# is to squash them together into output lines containing 72 characters
# each.  A semicolon must be added between input lines.  Runs of blanks
# and tabs in input lines must be squashed into single blanks.
# Occurrences of "**" in input lines must be replaced by "^".
#
# Here's a test case:

test = """\
   d    =   sqrt(b**2  -  4*a*c)
twoa    =   2*a
   L    =   -b/twoa
   R    =   d/twoa
  A1    =   L + R
  A2    =   L - R\0
"""

# The program should print:
# d = sqrt(b^2 - 4*a*c);twoa = 2*a; L = -b/twoa; R = d/twoa; A1 = L + R;
#A2 = L - R
#done
# getlines: delivers the input lines
# disassemble: takes input lines and delivers them one
#    character at a time, also inserting a semicolon into
#    the stream between lines
# squasher:  takes characters and passes them on, first replacing
#    "**" with "^" and squashing runs of whitespace
# assembler: takes characters and packs them into lines with 72
#    characters each; when it sees a null byte, passes the last
#    line to putline and then kills all the coroutines

from Generator import Generator

def getlines(text):
    g = Generator()
    for line in text.split('\n'):
        g.suspend(line)
    g.end()

def disassemble(cards):
    g = Generator()
    try:
        for card in cards:
            for i in range(len(card)):
                if card[i] == '\0':
                    raise EOFError 
                g.suspend(card[i])
            g.suspend(';')
    except EOFError:
        pass
    while 1:
        g.suspend('') # infinite stream, handy for squash()

def squash(chars):
    g = Generator()
    while 1:
        c = chars.next()
        if not c:
            break
        if c == '*':
            c2 = chars.next()
            if c2 == '*':
                c = '^'
            else:
                g.suspend(c)
                c = c2
        if c in ' \t':
            while 1:
                c2 = chars.next()
                if c2 not in ' \t':
                    break
            g.suspend(' ')
            c = c2
        if c == '\0':
            g.end()
        g.suspend(c)
    g.end()

def assemble(chars):
    g = Generator()
    line = ''
    for c in chars:
        if c == '\0':
            g.end()
        if len(line) == 72:
            g.suspend(line)
            line = ''
        line = line + c
    line = line + ' '*(72 - len(line))
    g.suspend(line)
    g.end()


if __name__ == '__main__':
    for line in assemble(squash(disassemble(getlines(test)))):
        print line
    print 'done'

        


From cce@clarkevans.com  Thu Mar 22 10:14:25 2001
From: cce@clarkevans.com (Clark C. Evans)
Date: Thu, 22 Mar 2001 05:14:25 -0500 (EST)
Subject: [Python-Dev] Re: PEP 1, PEP Purpose and Guidelines
In-Reply-To: <15032.59269.4520.961715@anthem.wooz.org>
Message-ID: <Pine.LNX.4.21.0103220504280.18700-100000@clarkevans.com>

Barry,

  If you don't mind, I'd like to apply for one of them
  there PEP numbers.  Sorry for not following the guidelines,
  it won't happen again.

  Also, I believe that this isn't just my work, but rather
  a first pass at consensus on this issue via the vocal and
  silent feedback from those on the main and type special
  interest groups.  I hope that I have done their ideas
  and feedback justice (if not, I'm sure I'll hear about it).

Thank you so much,

Clark

...

PEP: XXX
Title: Protocol Checking and Adaptation
Version: $Revision$
Author: Clark Evans
Python-Version: 2.2
Status: Draft
Type: Standards Track
Created: 21-Mar-2001
Updated: 23-Mar-2001

Abstract

    This proposal puts forth a built-in, explicit method for
    the adaptation (including verification) of an object to a 
    context where a specific type, class, interface, or other 
    protocol is expected.  This proposal can leverage existing
    protocols such as the type system and class hierarchy and is
    orthogonal, if not complementary, to the pending interface
    mechanism [1] and signature-based type-checking system [2].

    This proposal allows an object to answer two questions.  First,
    are you a such and such?  Meaning, does this object have a 
    particular required behavior?  And second, if not, can you give
    me a handle which is?  Meaning, can the object construct an 
    appropriate wrapper object which can provide compliance with
    the protocol expected.  This proposal does not limit what 
    such and such (the protocol) is or what compliance to that
    protocol means, and it allows other query/adapter techniques 
    to be added later and utilized through the same interface 
    and infrastructure introduced here.

Motivation

    Currently there is no standardized mechanism in Python for 
    asking if an object supports a particular protocol. Typically,
    existence of particular methods, particularly those that are 
    built-in such as __getitem__, is used as an indicator of 
    support for a particular protocol.  This technique works for 
    protocols blessed by GvR, such as the new enumerator proposal
    identified by a new built-in __iter__.  However, this technique
    does not admit an infallible way to identify interfaces lacking 
    a unique, built-in signature method.

    More so, there is no standardized way to obtain an adapter 
    for an object.  Typically, with objects passed to a context
    expecting a particular protocol, either the object knows about 
    the context and provides its own wrapper or the context knows 
    about the object and automatically wraps it appropriately.  The 
    problem with this approach is that such adaptations are one-offs,
    are not centralized in a single place of the user's code, and 
    are not executed with a common technique, etc.  This lack of
    standardization increases code duplication with the same 
    adapter occurring in more than one place or it encourages 
    classes to be re-written instead of adapted.  In both cases,
    maintainability suffers.

    In the recent type special interest group discussion [3], there
    were two complementary quotes which motivated this proposal:

       "The deep(er) part is whether the object passed in thinks of
        itself as implementing the Foo interface. This means that
        its author has (presumably) spent at least a little time
        about the invariants that a Foo should obey."  GvR [4]

    and

       "There is no concept of asking an object which interface it
        implements. There is no "the" interface it implements. It's
        not even a set of interfaces, because the object doesn't 
        know them in advance. Interfaces can be defined after objects
        conforming to them are created." -- Marcin Kowalczyk [5]

    The first quote focuses on the intent of a class, including 
    not only the existence of particular methods, but more 
    importantly the call sequence, behavior, and other invariants.
    The second quote, by contrast, focuses on the type signature of
    the class.  These quotes highlight a distinction between interface
    as a "declarative, I am a such-and-such" construct, as opposed
    to a "descriptive, It looks like a such-and-such" mechanism.

    Four positive cases for code-reuse include:

     a) It is obvious the object has the same protocol that
        the context expects.  This occurs when the expected type
        or class happens to be the type or class of the object.
        This is the simplest and easiest case.

     b) When the object knows about the protocol that the
        context requires and knows how to adapt itself 
        appropriately.  Perhaps it already has the methods
        required, or it can make an appropriate wrapper

     c) When the protocol knows about the object and can
        adapt it on behalf of the context.  This is often
        the case with backwards-compatibility cases.

     d) When the context knows about the object and the 
        protocol and knows how to adapt the object so that
        the required protocol is satisfied.

    This proposal should allow each of these cases to be handled;
    however, the proposal concentrates only on the first two cases,
    leaving the latter two cases where the protocol adapts the 
    object and where the context adapts the object to other proposals.
    Furthermore, this proposal attempts to enable these four cases
    in a manner completely neutral to type checking or interface
    declaration and enforcement proposals.  

Specification

    For the purposes of this specification, let the word protocol
    signify any current or future method of stating requirements of 
    an object, be it through type checking, class membership, interface 
    examination, explicit types, etc.  Also let the word compliance
    be dependent and defined by each specific protocol.

    This proposal initially supports one protocol: type/class
    membership as defined by isinstance(object,protocol).
    Other types of protocols, such as interfaces can be added through
    another proposal without loss of generality of this proposal.  
    This proposal attempts to keep the first set of protocols small
    and relatively unobjectionable.

    This proposal would introduce a new binary operator "isa".
    The left hand side of this operator is the object to be checked
    ("self"), and the right hand side is the protocol to check this
    object against ("protocol").  The return value of the operator 
    will be either the left hand side if the object complies with 
    the protocol or None.
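Spelled as a plain function rather than as operator syntax (which does not exist; this is an illustrative sketch only, using the type/class membership protocol as the compliance check), the contract looks like:

```python
# Sketch of the proposed "isa" semantics: return the left-hand
# side if the object complies with the protocol, None otherwise.
def isa(obj, protocol):
    if isinstance(obj, protocol):
        return obj
    return None

print(isa(3, int))     # 3
print(isa("ni", int))  # None
```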

    Given an object and a protocol, the adaptation of the object is:
     a) self, if the object is already compliant with the protocol,
     b) a secondary object ("wrapper"), which provides a view of the
        object compliant with the protocol.  This is explicitly 
        vague, and wrappers are allowed to maintain their own 
        state as necessary.
     c) None, if the protocol is not understood, or if object 
        cannot be verified compliant with the protocol and/or
        if an appropriate wrapper cannot be constructed.

    Further, a new built-in function, adapt, is introduced.  This
    function takes two arguments, the object being adapted ("obj") 
    and the protocol requested of the object ("protocol").  This
    function returns the adaptation of the object for the protocol,
    either self, a wrapper, or None depending upon the circumstances.
    None may be returned if adapt does not understand the protocol,
    or if adapt cannot verify compliance or create a wrapper.

    For this machinery to work, two other components are required.
    First is a private, shared implementation of the adapt function
    and isa operator.  This private routine will have three 
    arguments: the object being adapted ("self"), the protocol 
    requested ("protocol"), and a flag ("can_wrap").  The flag
    specifies if the adaptation may be a wrapper, if the flag is not
    set, then the adaptation may only be self or None.  This flag is
    required to support the isa operator.  The obvious case 
    mentioned in the motivation, where the object easily complies 
    with the protocol, is implemented in this private routine.  

    To enable the second case mentioned in the motivation, when 
    the object knows about the protocol, a new method slot, __adapt__
    on each object is required.  This optional slot takes three
    arguments, the object being adapted ("self"), the protocol 
    requested ("protocol"), and a flag ("can_wrap").  And, like 
    the other functions, must return an adaptation, be it self, a
    wrapper if allowed, or None.  This method slot allows a class 
    to declare which protocols it supports in addition to those 
    which are part of the obvious case.

    This slot is called first before the obvious cases are examined, 
    if None is returned then the default processing proceeds.  If the
    default processing is wrong, then the AdaptForceNoneException
    can be thrown.  The private routine will catch this specific 
    exception and return None in this case.  This technique allows a
    class to subclass another class, yet catch the cases where it is
    considered substitutable for the base class.  Since 
    this is the exception, rather than the normal case, an exception 
    is warranted and is used to pass this information along.  The 
    caller of adapt or isa will be unaware of this particular exception
    as the private routine will return None in this particular case.

    Please note two important things.  First, this proposal does not
    preclude the addition of other protocols.  Second, this proposal 
    does not preclude other possible cases where adapter pattern may
    hold, such as the protocol knowing the object or the context 
    knowing the object and the protocol (cases c and d in the 
    motivation).  In fact, this proposal opens the gate for these 
    other mechanisms to be added; while keeping the change in 
    manageable chunks.

Reference Implementation and Example Usage

    -----------------------------------------------------------------
    adapter.py
    -----------------------------------------------------------------
        import types
        AdaptForceNoneException = "(private error for adapt and isa)"

        def internal_adapt(obj,protocol,can_wrap):

            # the obj may have the answer, so ask it about the protocol
            adapt = getattr(obj, '__adapt__',None)
            if adapt:
                try:
                    retval = adapt(protocol,can_wrap)
                    # todo: if not can_wrap check retval for None or obj
                except AdaptForceNoneException:
                    return None
                if retval: return retval

            # the protocol may have the answer, so ask it about the obj
            pass

            # the context may have the answer, so ask it about the obj
            pass

            # check to see if the current object is ok as is
            if type(protocol) is types.TypeType or \
               type(protocol) is types.ClassType:
                if isinstance(obj,protocol):
                    return obj

            # ok... nothing matched, so return None
            return None

        def adapt(obj,protocol):
            return internal_adapt(obj,protocol,1)

        # imagine binary operator syntax
        def isa(obj,protocol):
            return internal_adapt(obj,protocol,0)

    -----------------------------------------------------------------
    test.py
    -----------------------------------------------------------------
        from adapter import adapt
        from adapter import isa
        from adapter import AdaptForceNoneException

        class KnightsWhoSayNi: pass  # shrubbery troubles

        class EggsOnly:  # an unrelated class/interface
            def eggs(self,str): print "eggs!" + str

        class HamOnly:  # used as an interface, no inheritance
            def ham(self,str): pass
            def _bugger(self): pass  # an irritating private member

        class SpamOnly: # a base class, inheritance used
            def spam(self,str): print "spam!" + str

        class EggsSpamAndHam (SpamOnly,KnightsWhoSayNi):
            def ham(self,str): print "ham!" + str
            def __adapt__(self,protocol,can_wrap):
                if protocol is HamOnly:
                    # implements HamOnly implicitly, no _bugger
                    return self
                if protocol is KnightsWhoSayNi:
                    # we are no longer the Knights who say Ni!
                    raise AdaptForceNoneException
                if protocol is EggsOnly and can_wrap:
                    # Knows how to create the eggs!
                    return EggsOnly()

        def test():
            x = EggsSpamAndHam()
            adapt(x,SpamOnly).spam("Ni!")
            adapt(x,EggsOnly).eggs("Ni!")
            adapt(x,HamOnly).ham("Ni!")
            adapt(x,EggsSpamAndHam).ham("Ni!")
            if None is adapt(x,KnightsWhoSayNi): print "IckIcky...!"
            if isa(x,SpamOnly): print "SpamOnly"
            if isa(x,EggsOnly): print "EggsOnly"
            if isa(x,HamOnly): print "HamOnly"
            if isa(x,EggsSpamAndHam): print "EggsAndSpam"
            if isa(x,KnightsWhoSayNi): print "KnightsWhoSayNi"

    -----------------------------------------------------------------
    Example Run
    -----------------------------------------------------------------
        >>> import test
        >>> test.test()
        spam!Ni!
        eggs!Ni!
        ham!Ni!
        ham!Ni!
        IckIcky...!
        SpamOnly
        HamOnly
        EggsAndSpam

Relationship To Paul Prescod and Tim Hochberg's Type Assertion Method

    The example syntax Paul put forth recently [2] was:

        interface Interface
            def __check__(self,obj)

    Paul's proposal adds the checking part to the third case (c)
    described in the motivation, when the protocol knows about
    the object.  As stated, this could easily be added as a step
    in the internal_adapt function:

            # the protocol may have the answer, so ask it about the obj

                if type(protocol) is types.Interface:
                    if protocol.__check__(obj):
                        return obj

    Further, and quite excitingly, if the syntax for this type 
    based assertion added an extra argument, "can_wrap", then this
    mechanism could be overloaded to also provide adapters to
    objects that the interface knows about.

    In short, the work put forth by Paul and company is great, and
    I don't see any reason why these two proposals couldn't work
    together in harmony, if not be completely complementary.

Relationship to Python Interfaces [1] by Michel Pelletier

    The relationship to this proposal is a bit less clear 
    to me, although an implements(obj,anInterface) built-in
    function was mentioned.  Thus, this could be added naively
    as a step in the internal_adapt function:

        if typ is types.Interface:
            if implements(obj,protocol):
                return obj

    However, there is a clear concern here.  Due to the 
    tight semantics being described in this specification,
    it is clear the isa operator proposed would have to have 
    a 1-1 correspondence with the implements function, when the
    type of protocol is an Interface.  Thus, when can_wrap is
    true, __adapt__ may be called, however, it is clear that
    the return value would have to be double-checked.  Thus, 
    a more realistic change would be more like:

        def internal_interface_adapt(obj,interface):
            if implements(obj,interface):
                return obj
            else:
                return None

        def internal_adapt(obj,protocol,can_wrap):

            # the obj may have the answer, so ask it about the protocol
            adapt = getattr(obj, '__adapt__',None)
            if adapt:
                try:
                    retval = adapt(protocol,can_wrap)
                except AdaptForceNoneException:
                    if type(protocol) is types.Interface:
                        return internal_interface_adapt(obj,protocol)
                    else:
                        return None
                if retval: 
                    if type(protocol) is types.Interface:
                        if can_wrap and implements(retval,protocol):
                            return retval
                        return internal_interface_adapt(obj,protocol)
                    else:
                        return retval

            if type(protocol) is types.Interface:
                return internal_interface_adapt(obj,protocol)

            # remainder of function... 

    It is significantly more complicated, but doable.
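    To make the flow above concrete, here is a small runnable sketch.
    All the names in it (the adapt stand-in, AdaptForceNoneException,
    the temperature classes) are invented illustrations of the PEP's
    proposed machinery, not anything that actually ships:

```python
class AdaptForceNoneException(Exception):
    pass

def adapt(obj, protocol, can_wrap=True):
    # Ask the object itself first, as internal_adapt does above.
    hook = getattr(obj, '__adapt__', None)
    if hook is not None:
        try:
            retval = hook(protocol, can_wrap)
        except AdaptForceNoneException:
            return None
        if retval is not None:
            return retval
    # Fall back: the object may already satisfy the protocol.
    if isinstance(obj, protocol):
        return obj
    return None

class Celsius:
    def __init__(self, degrees):
        self.degrees = degrees

class Fahrenheit:
    def __init__(self, degrees):
        self.degrees = degrees

class Temperature(Celsius):
    def __adapt__(self, protocol, can_wrap):
        # Wrap ourselves only when the caller allows adaptation.
        if protocol is Fahrenheit and can_wrap:
            return Fahrenheit(self.degrees * 9.0 / 5.0 + 32)
        return None

t = Temperature(100)
print(adapt(t, Celsius) is t)        # already compliant: returned as-is
print(adapt(t, Fahrenheit).degrees)  # wrapped by __adapt__
```

    Note how checking (return self) and adapting (return a wrapper)
    fall out of the same entry point, which is the unification the
    Q&A below defends.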

Relationship To Iterator Proposal:
 
    The iterator special interest group is proposing a new special
    method called "__iter__", which could be replaced with __adapt__
    if an Iterator class is introduced.  Following is an example.

        class Iterator:
            def next(self):
                raise IndexError

        class IteratorTest:
            def __init__(self,max):
                self.max = max
            def __adapt__(self,protocol,can_wrap):
                if protocol is Iterator and can_wrap:
                    class IteratorTestIterator(Iterator):
                        def __init__(self,max):
                            self.max = max
                            self.count = 0
                        def next(self):
                            self.count = self.count + 1
                            if self.count < self.max:
                                return self.count
                            return Iterator.next(self)
                    return IteratorTestIterator(self.max)
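    Driving the example by hand shows the protocol in action.  The
    adapt stand-in below is invented (the PEP proposes adapt as a
    built-in); note that next() raises IndexError to signal
    exhaustion, following the then-current iteration idiom:

```python
class Iterator:
    def next(self):
        raise IndexError

class IteratorTest:
    def __init__(self, max):
        self.max = max
    def __adapt__(self, protocol, can_wrap):
        if protocol is Iterator and can_wrap:
            class IteratorTestIterator(Iterator):
                def __init__(self, max):
                    self.max = max
                    self.count = 0
                def next(self):
                    self.count = self.count + 1
                    if self.count < self.max:
                        return self.count
                    return Iterator.next(self)
            return IteratorTestIterator(self.max)

def adapt(obj, protocol, can_wrap=True):
    # minimal stand-in for the proposed built-in
    return obj.__adapt__(protocol, can_wrap)

it = adapt(IteratorTest(3), Iterator)
results = []
try:
    while True:
        results.append(it.next())
except IndexError:
    pass
print(results)   # [1, 2]
```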

Relationship To Microsoft's QueryInterface:

    Although this proposal may sound similar to Microsoft's 
    QueryInterface, it differs in a number of aspects.  First, 
    there is no special "IUnknown" interface which can be used
    for object identity, although this could be proposed as one
    of those "special" blessed interface protocol identifiers.
    Second, with QueryInterface, once an object supports a particular
    interface it must always thereafter support this interface; 
    this proposal makes no such guarantee, although this may be 
    added at a later time.  Third, implementations of Microsoft's
    QueryInterface must support a kind of equivalence relation. 
    By reflexive they mean that querying an interface for itself 
    must always succeed.  By symmetrical they mean that if one 
    can successfully query an interface IA for a second interface 
    IB, then one must also be able to successfully query the 
    interface IB for IA.  And finally, by transitive they mean if 
    one can successfully query IA for IB and one can successfully
    query IB for IC, then one must be able to successfully query 
    IA for IC.  Ability to support this type of equivalence relation
    should be encouraged, but may not be possible.  Further research 
    on this topic (by someone familiar with Microsoft COM) would be
    helpful in further determining how compatible this proposal is.

Backwards Compatibility

    There should be no problem with backwards compatibility.  
    Indeed this proposal, save a built-in adapt() function, 
    could be tested without changes to the interpreter.

Questions and Answers

    Q:  Why was the name changed from __query__ to __adapt__ ?  

    A:  It was clear that significant QueryInterface assumptions were
        being laid upon the proposal, when the intent was more of an 
        adapter.  Of course, if an object does not need to be adapted
        then it can be used directly and this is the basic premise.

    Q:  Why is the checking mechanism mixed with the adapter
        mechanism?

    A:  Good question.  They could be separated; however, there
        is significant overlap if you consider the checking
        protocol as returning a compliant object (self) or
        not a compliant object (None).  In this way, adapting
        becomes a special case of checking, via can_wrap.

        Really, this could be separated out, but the two 
        concepts are very related, so much duplicate work
        would be done, and the overall mechanism would feel
        quite a bit less unified.

    Q:  This is just a type-coercion proposal.

    A:  No.  Certainly it could be used for type-coercion, but such
        coercion would be explicit, via __adapt__ or the adapt function. 
        Of course, if this were used for the iterator interface, then the
        for construct might do an implicit __adapt__(Iterator), but
        this would be the exception rather than the rule.

    Q:  Why did the author write this PEP?

    A:  He wanted a simple proposal that covered the "deep part" of
        interfaces without getting tied up in signature woes.  Also, it
        was clear that the __iter__ proposal put forth is just an example
        of this type of interface.  Further, the author is doing XML 
        based client server work, and wants to write generic tree based
        algorithms that work on particular interfaces and would
        like these algorithms to be used by anyone willing to make
        an "adapter" having the interface required by the algorithm.

    Q:  Is this in opposition to the type special interest group?

    A:  No.  It is meant as a simple, need based solution that could
        easily complement the efforts by that group.

    Q:  Why was the identifier changed from a string to a class?

    A:  This was done at Michel Pelletier's suggestion.  This mechanism
        appears to be much cleaner than the DNS string proposal, which 
        caused a few eyebrows to rise.  

    Q:  Why not handle the case where instances are used to identify 
        protocols?  In other words, 6 isa 6 (where the 6 on the right
        is promoted to a types.Int).

    A:  Sounds like someone might object; let's keep this in a
        separate proposal.

    Q:  Why not let obj isa obj be true?  Or class isa baseclass?

    A:  Sounds like someone might object; let's keep this in a
        separate proposal.

    Q:  It seems that a reverse lookup could be used, why not add this?

    A:  There are many other lookup and/or checking mechanisms that
        could be used here.  However, the goal of this PEP is to be 
        small and sweet ... having any more functionality would make
        it more objectionable to some people.  That said, this proposal
        was designed in large part to be completely orthogonal to other
        methods, so these mechanisms can be added later if needed.

Credits

    This proposal was created in large part from the feedback 
    of the talented individuals on both the main mailing list
    and also the type signature list.  Specific contributors
    include (sorry if I missed someone):

        Robin Thomas, Paul Prescod, Michel Pelletier, 
        Alex Martelli, Jeremy Hylton, Carlos Ribeiro,
        Aahz Maruch, Fredrik Lundh, Rainer Deyke,
        Timothy Delaney, and Huaiyu Zhu

Copyright

    This document has been placed in the public domain.


References and Footnotes

    [1] http://python.sourceforge.net/peps/pep-0245.html
    [2] http://mail.python.org/pipermail/types-sig/2001-March/001223.html
    [3] http://www.zope.org/Members/michel/types-sig/TreasureTrove
    [4] http://mail.python.org/pipermail/types-sig/2001-March/001105.html
    [5] http://mail.python.org/pipermail/types-sig/2001-March/001206.html
    [6] http://mail.python.org/pipermail/types-sig/2001-March/001223.html




From thomas@xs4all.net  Thu Mar 22 11:14:48 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 12:14:48 +0100
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103220139.NAA06587@s454.cosc.canterbury.ac.nz>; from greg@cosc.canterbury.ac.nz on Thu, Mar 22, 2001 at 01:39:05PM +1200
References: <012601c0b1d8$7dc3cc50$e46940d5@hagrid> <200103220139.NAA06587@s454.cosc.canterbury.ac.nz>
Message-ID: <20010322121448.T29286@xs4all.nl>

On Thu, Mar 22, 2001 at 01:39:05PM +1200, Greg Ewing wrote:
> Fredrik Lundh <fredrik@effbot.org>:

> > I associate "yield" with non-preemptive threading (yield
> > to anyone else, not necessarily my caller).

> Well, this flavour of generators is sort of a special case
> subset of non-preemptive threading, so the usage is not
> entirely inconsistent.

I prefer yield, but I'll yield to suspend as long as we get coroutines or
suspendable frames so I can finish my Python-embedded MUX with
task-switching Python code :-)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@digicool.com  Thu Mar 22 13:51:16 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 22 Mar 2001 08:51:16 -0500
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: Your message of "Wed, 21 Mar 2001 22:16:30 EST."
 <15033.28302.876972.730118@anthem.wooz.org>
References: <3AB87C4E.450723C2@lemburg.com> <200103220137.NAA06583@s454.cosc.canterbury.ac.nz>
 <15033.28302.876972.730118@anthem.wooz.org>
Message-ID: <200103221351.IAA25632@cj20424-a.reston1.va.home.com>

> >>>>> "GE" == Greg Ewing <greg@cosc.canterbury.ac.nz> writes:
> 
>     GE> Wouldn't it be better to fix these functions on the Mac
>     GE> instead of depriving everyone else of them?
> 
> Either way, shutil sure is useful!

Yes, but deceptively so.  What should we do?  Anyway, it doesn't
appear to be officially deprecated yet (can't see it in the docs) and
I think it may be best to keep it that way.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From pf@artcom-gmbh.de  Thu Mar 22 14:17:46 2001
From: pf@artcom-gmbh.de (Peter Funk)
Date: Thu, 22 Mar 2001 15:17:46 +0100 (MET)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <200103221351.IAA25632@cj20424-a.reston1.va.home.com> from Guido van Rossum at "Mar 22, 2001  8:51:16 am"
Message-ID: <m14g5uN-000CnEC@artcom0.artcom-gmbh.de>

Hi,

Guido van Rossum schrieb:
> > >>>>> "GE" == Greg Ewing <greg@cosc.canterbury.ac.nz> writes:
> > 
> >     GE> Wouldn't it be better to fix these functions on the Mac
> >     GE> instead of depriving everyone else of them?
> > 
> > Either way, shutil sure is useful!
> 
> Yes, but deceptively so.  What should we do?  Anyway, it doesn't
> appear to be officially deprecated yet (can't see it in the docs) and
> I think it may be best to keep it that way.

A very simple idea would be, to provide two callback hooks,
which will be invoked by each call to copyfile or remove.

Example:  Someone uses the package netatalk on Linux to provide file
services to Macs.  netatalk stores the resource forks in hidden sub
directories called .AppleDouble.  The callback function could then
copy the .AppleDouble/files around using shutil.copyfile itself.
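Such hooks are easy to prototype in pure Python without touching
shutil itself.  Everything below (the hook name, the .AppleDouble
mirroring) is invented to illustrate the suggestion, not existing
shutil behavior:

```python
import os
import shutil
import tempfile

# Hypothetical hook -- nothing like this exists in the real shutil.
copy_hook = None   # called as copy_hook(src, dst) after each copy

def copyfile(src, dst):
    shutil.copyfile(src, dst)
    if copy_hook is not None:
        copy_hook(src, dst)

def netatalk_hook(src, dst):
    # Mirror the hidden .AppleDouble resource fork, if one exists.
    sdir, name = os.path.split(src)
    ddir, dname = os.path.split(dst)
    rsrc = os.path.join(sdir, '.AppleDouble', name)
    if os.path.exists(rsrc):
        double = os.path.join(ddir, '.AppleDouble')
        if not os.path.isdir(double):
            os.mkdir(double)
        shutil.copyfile(rsrc, os.path.join(double, dname))

copy_hook = netatalk_hook

# Demo: a file whose resource fork lives next to it in .AppleDouble.
src_dir, dst_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
os.mkdir(os.path.join(src_dir, '.AppleDouble'))
with open(os.path.join(src_dir, 'song.aiff'), 'w') as f:
    f.write('data')
with open(os.path.join(src_dir, '.AppleDouble', 'song.aiff'), 'w') as f:
    f.write('fork')
copyfile(os.path.join(src_dir, 'song.aiff'),
         os.path.join(dst_dir, 'song.aiff'))
```

A remove() wrapper with a second hook would follow the same pattern.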

Regards, Peter



From fredrik@effbot.org  Thu Mar 22 14:37:59 2001
From: fredrik@effbot.org (Fredrik Lundh)
Date: Thu, 22 Mar 2001 15:37:59 +0100
Subject: [Python-Dev] booted from sourceforge
Message-ID: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>

attempts to access the python project, the tracker (etc) results in:

    You don't have permission to access <whatever> on this server.

is it just me?

Cheers /F



From thomas@xs4all.net  Thu Mar 22 14:44:29 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 15:44:29 +0100
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.57243.391141.409534@mace.lucasdigital.com>; from tommy@ilm.com on Wed, Mar 21, 2001 at 09:08:49AM -0800
References: <15032.22504.605383.113425@mace.lucasdigital.com> <20010321140704.R29286@xs4all.nl> <15032.57243.391141.409534@mace.lucasdigital.com>
Message-ID: <20010322154429.W27808@xs4all.nl>

On Wed, Mar 21, 2001 at 09:08:49AM -0800, Flying Cougar Burnette wrote:

> with these changes to test_pty.py I now get:

> test_pty
> The actual stdout doesn't match the expected stdout.
> This much did match (between asterisk lines):
> **********************************************************************
> test_pty
> **********************************************************************
> Then ...
> We expected (repr): 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'
> But instead we got: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n'
> test test_pty failed -- Writing: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n', expected: 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'
> 
> but when I import test.test_pty that blank line is gone.  Sounds like
> the test verification just needs to be a bit more flexible, maybe?

Yes... I'll explicitly turn \r\n into \n (at the end of the string) so the
test can still use the normal print/stdout-checking routines (mostly because
I want to avoid doing the error reporting myself) but it would still barf if
the read strings contain other trailing garbage or extra whitespace and
such.
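Sketch of that massaging, for the record (one plausible reading; the
actual checked-in code may differ):

```python
def normalize_output(data):
    # Turn DOS-style line ends back into plain \n so the expected-
    # output comparison still works; any other trailing garbage or
    # extra whitespace still causes a mismatch.
    return data.replace('\r\n', '\n')

print(normalize_output(
    'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n'))
```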

I'll check in a new version in a few minutes.. Let me know if it still has
problems.

> test_openpty passes without a problem, BTW.

Good... so at least that works ;-)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From thomas@xs4all.net  Thu Mar 22 14:45:57 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 15:45:57 +0100
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>; from fredrik@effbot.org on Thu, Mar 22, 2001 at 03:37:59PM +0100
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>
Message-ID: <20010322154557.A13066@xs4all.nl>

On Thu, Mar 22, 2001 at 03:37:59PM +0100, Fredrik Lundh wrote:
> attempts to access the python project, the tracker (etc) results in:

>     You don't have permission to access <whatever> on this server.

> is it just me?

I noticed this yesterday as well, but only for a few minutes. I wasn't on SF
for long, though, so I might have hit it again if I'd tried once more. I
suspect they are/were commissioning a new (set of) webserver(s) in the pool,
and they screwed up the permissions.

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From guido@digicool.com  Thu Mar 22 14:55:37 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 22 Mar 2001 09:55:37 -0500
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: Your message of "Thu, 22 Mar 2001 15:37:59 +0100."
 <000f01c0b2dd$b477a3b0$e46940d5@hagrid>
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>
Message-ID: <200103221455.JAA25875@cj20424-a.reston1.va.home.com>

> attempts to access the python project, the tracker (etc) results in:
> 
>     You don't have permission to access <whatever> on this server.
> 
> is it just me?
> 
> Cheers /F

No, it's SF.  From their most recent mailing (this morning!) to the
customer:

"""The good news is, it is unlikely SourceForge.net will have any
power related downtime.  In December we moved the site to Exodus, and
they have amble backup power systems to deal with the on going
blackouts."""

So my expectation is that it's a power failure -- system folks are
notoriously optimistic about the likelihood of failures... :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fdrake@acm.org  Thu Mar 22 14:57:47 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Thu, 22 Mar 2001 09:57:47 -0500 (EST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <200103221351.IAA25632@cj20424-a.reston1.va.home.com>
References: <3AB87C4E.450723C2@lemburg.com>
 <200103220137.NAA06583@s454.cosc.canterbury.ac.nz>
 <15033.28302.876972.730118@anthem.wooz.org>
 <200103221351.IAA25632@cj20424-a.reston1.va.home.com>
Message-ID: <15034.4843.674513.237570@localhost.localdomain>

Guido van Rossum writes:
 > Yes, but deceptively so.  What should we do?  Anyway, it doesn't
 > appear to be officially deprecated yet (can't see it in the docs) and
 > I think it may be best to keep it that way.

  I don't think it's deceived me yet!  I see no reason to deprecate
it, and I don't recall anyone telling me it should be.  Nor do I
recall a discussion here suggesting that it should be.
  If it has hidden corners that I just haven't run into (and it *has*
been pointed out that it does have corners, at least on some
platforms), why don't we just consider those bugs that can be fixed?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From thomas@xs4all.net  Thu Mar 22 15:03:20 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 16:03:20 +0100
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: <200103221455.JAA25875@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 22, 2001 at 09:55:37AM -0500
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid> <200103221455.JAA25875@cj20424-a.reston1.va.home.com>
Message-ID: <20010322160320.B13066@xs4all.nl>

On Thu, Mar 22, 2001 at 09:55:37AM -0500, Guido van Rossum wrote:
> > attempts to access the python project, the tracker (etc) results in:
> > 
> >     You don't have permission to access <whatever> on this server.
> > 
> > is it just me?
> > 
> > Cheers /F

> [..] my expectation that it's a power failure -- system folks are
> notoriously optimistic about the likelihood of failures... :-)

It's quite uncommon for power failures to cause permission problems, though :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From mwh21@cam.ac.uk  Thu Mar 22 15:18:58 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 22 Mar 2001 15:18:58 +0000
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: "Fredrik Lundh"'s message of "Thu, 22 Mar 2001 15:37:59 +0100"
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>
Message-ID: <m33dc5g7bx.fsf@atrus.jesus.cam.ac.uk>

"Fredrik Lundh" <fredrik@effbot.org> writes:

> attempts to access the python project, the tracker (etc) results in:
> 
>     You don't have permission to access <whatever> on this server.
> 
> is it just me?

I was getting this a lot yesterday.  Give it a minute, and try again -
worked for me, albeit somewhat tediously.

Cheers,
M.

-- 
  Just put the user directories on a 486 with deadrat7.1 and turn the
  Octane into the afforementioned beer fridge and keep it in your
  office. The lusers won't notice the difference, except that you're
  more cheery during office hours.              -- Pim van Riezen, asr



From gward@python.net  Thu Mar 22 16:50:43 2001
From: gward@python.net (Greg Ward)
Date: Thu, 22 Mar 2001 11:50:43 -0500
Subject: [Python-Dev] Re: Alleged deprecation of shutils
In-Reply-To: <034601c0b274$d8bab8c0$8119fea9@neil>; from nhodgson@bigpond.net.au on Thu, Mar 22, 2001 at 01:07:28PM +1100
References: <200103220137.NAA06583@s454.cosc.canterbury.ac.nz> <034601c0b274$d8bab8c0$8119fea9@neil>
Message-ID: <20010322115043.A5993@cthulhu.gerg.ca>

On 22 March 2001, Neil Hodgson said:
>    Then they should be fixed for Windows as well where they don't copy
> secondary forks either. While not used much by native code, forks are
> commonly used on NT servers which serve files to Macintoshes.
> 
>    There is also the issue of other metadata. Should shutil optionally copy
> ownership information? Access Control Lists? Summary information? A really
> well designed module here could be very useful but quite some work.

There's a pretty good 'copy_file()' routine in the Distutils; I found
shutil quite inadequate, so rolled my own.  Jack Jansen patched it so it
does the "right thing" on Mac OS.  By now, it has probably copied many
files all over the place on all of your computers, so it sounds like it
works.  ;-)

See the distutils.file_util module for implementation and documentation.

        Greg
-- 
Greg Ward - Unix bigot                                  gward@python.net
http://starship.python.net/~gward/
Sure, I'm paranoid... but am I paranoid ENOUGH?


From fredrik@pythonware.com  Thu Mar 22 17:09:49 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Thu, 22 Mar 2001 18:09:49 +0100
Subject: [Python-Dev] Function in os module for available disk space, why  not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF> <3AB62EAE.FCFD7C9F@lemburg.com> <048401c0b172$dd6892a0$e46940d5@hagrid>
Message-ID: <01bd01c0b2f2$e8702fb0$e46940d5@hagrid>

> (and my plan is to make a statvfs subset available on
> all platforms, which makes your code even simpler...)

windows patch here:
http://sourceforge.net/tracker/index.php?func=detail&aid=410547&group_id=5470&atid=305470

guess it has to wait for 2.2, though...

Cheers /F



From greg@cosc.canterbury.ac.nz  Thu Mar 22 22:36:02 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 23 Mar 2001 10:36:02 +1200 (NZST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <m14g5uN-000CnEC@artcom0.artcom-gmbh.de>
Message-ID: <200103222236.KAA08215@s454.cosc.canterbury.ac.nz>

pf@artcom-gmbh.de (Peter Funk):

> netatalk stores the resource forks in hidden sub
> directories called .AppleDouble.

None of that is relevant if the copying is being done from
the Mac end. To the Mac it just looks like a normal Mac
file, so the standard Mac file-copying techniques will work.
No need for any callbacks.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From tommy@ilm.com  Thu Mar 22 23:03:29 2001
From: tommy@ilm.com (Flying Cougar Burnette)
Date: Thu, 22 Mar 2001 15:03:29 -0800 (PST)
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
Message-ID: <15034.33486.157946.686067@mace.lucasdigital.com>

Hey Folks,

When running an interactive interpreter Python currently tries to
import "readline", ostensibly to make your interactive experience a
little easier (with history, extra keybindings, etc).  For a while now 
Python has also shipped with a standard module called "rlcompleter" 
which adds name completion to the readline functionality.

Can anyone think of a good reason why we don't import rlcompleter
instead of readline by default?  I can give you a good reason why it
*should*, but I'd rather not bore anyone with the details if I don't
have to.

All in favor, snag the following patch....


------------%< snip %<----------------------%< snip %<------------

Index: Modules/main.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Modules/main.c,v
retrieving revision 1.51
diff -r1.51 main.c
290c290
<               v = PyImport_ImportModule("readline");
---
>               v = PyImport_ImportModule("rlcompleter");
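(For comparison, the same effect is available today without patching
main.c: the usual recipe is to pair the two modules in the file named
by the PYTHONSTARTUP environment variable.  Note that rlcompleter only
defines the completer; binding the Tab key is still up to readline.)

```python
# Per-user completion setup, typically placed in $PYTHONSTARTUP.
import readline
import rlcompleter
readline.parse_and_bind("tab: complete")
```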


From pf@artcom-gmbh.de  Thu Mar 22 23:10:46 2001
From: pf@artcom-gmbh.de (Peter Funk)
Date: Fri, 23 Mar 2001 00:10:46 +0100 (MET)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <200103222236.KAA08215@s454.cosc.canterbury.ac.nz> from Greg Ewing at "Mar 23, 2001 10:36: 2 am"
Message-ID: <m14gEEA-000CnEC@artcom0.artcom-gmbh.de>

Hi,

> pf@artcom-gmbh.de (Peter Funk):
> > netatalk stores the resource forks in hidden sub
> > directories called .AppleDouble.

Greg Ewing:
> None of that is relevant if the copying is being done from
> the Mac end. To the Mac it just looks like a normal Mac
> file, so the standard Mac file-copying techniques will work.
> No need for any callbacks.

You are right and I know this.  But if you program an application,
which should work on the Unix/Linux side (for example a filemanager
or something similar), you have to pay attention to these files on
your own.  The same holds true for thumbnail images usually stored
in a .xvpics subdirectory.

All current file managers on Linux (KDE kfm, Gnome gmc) are flawed 
in this respect.

Regards, Peter
P.S.: I'm not going to write a GUI file manager in Python and using
shutil right now.  So this discussion is somewhat academic.
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)



From tim.one@home.com  Fri Mar 23 03:03:03 2001
From: tim.one@home.com (Tim Peters)
Date: Thu, 22 Mar 2001 22:03:03 -0500
Subject: [Python-Dev] CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEKNJHAA.tim.one@home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAELCJHAA.tim.one@home.com>

At work today, Guido and I both found lots of instabilities in current CVS
Python, under different flavors of Windows:  senseless errors in the test
suite, different behavior across runs, NULL-pointer errors in GC when running
under a debug-build Python, some kind of Windows "app error" alert box, and
weird complaints about missing attributes during Python shutdown.

Back at home, things *seem* much better, but I still get one of the errors I
saw at the office:  a NULL-pointer dereference in GC, using a debug-build
Python, in test_xmllib, while *compiling* xmllib.pyc (i.e., we're not
actually running the test yet, just compiling the module).  Alas, this does
not fail in isolation, it's only when a run of the whole test suite happens
to get to that point.  The error is in gc_list_remove, which is passed a node
whose left and right pointers are both NULL.

Only thing I know for sure is that it's not PyDict_Next's fault (I did a
quick run with *that* change commented out; made no difference).  That wasn't
just paranoia:  dict_traverse is two routines down the call stack when this
happens, and that uses PyDict_Next.

How's life on other platforms?  Anyone else ever build/test the debug Python?
Anyone have a hot efence/Insure raring to run?

not-picky-about-the-source-of-miracles-ly y'rs  - tim



From guido@digicool.com  Fri Mar 23 04:34:48 2001
From: guido@digicool.com (Guido van Rossum)
Date: Thu, 22 Mar 2001 23:34:48 -0500
Subject: [Python-Dev] Re: CVS Python is unstable
Message-ID: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>

Tim's problem can be reproduced in debug mode as follows (on Windows
as well as on Linux):

    import test.test_weakref
    import test.test_xmllib

Boom!  The debugger (on Windows) shows that it dies in some GC code.

After backing out Fred's last change to _weakref.c, this works as
expected and I get no other problems.

So I propose to back out that change and be done with it.

Here's the CVS comment:

----------------------------
revision 1.8
date: 2001/03/22 18:05:30;  author: fdrake;  state: Exp;  lines: +1 -1

Inform the cycle-detector that the a weakref object no longer needs to be
tracked as soon as it is clear; this can decrease the number of roots for
the cycle detector sooner rather than later in applications which hold on
to weak references beyond the time of the invalidation.
----------------------------

And the diff, to be backed out:

*** _weakref.c	2001/02/27 18:36:56	1.7
--- _weakref.c	2001/03/22 18:05:30	1.8
***************
*** 59,64 ****
--- 59,65 ----
      if (self->wr_object != Py_None) {
          PyWeakReference **list = GET_WEAKREFS_LISTPTR(self->wr_object);
  
+         PyObject_GC_Fini((PyObject *)self);
          if (*list == self)
              *list = self->wr_next;
          self->wr_object = Py_None;
***************
*** 78,84 ****
  weakref_dealloc(PyWeakReference *self)
  {
      clear_weakref(self);
-     PyObject_GC_Fini((PyObject *)self);
      self->wr_next = free_list;
      free_list = self;
  }
--- 79,84 ----

Fred, can you explain what the intention of this code was?

It's not impossible that the bug is actually in the debug mode macros,
but I'd rather not ship code that's unstable in debug mode -- that
defeats the purpose.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From tim.one@home.com  Fri Mar 23 05:10:33 2001
From: tim.one@home.com (Tim Peters)
Date: Fri, 23 Mar 2001 00:10:33 -0500
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>

[Guido]
> It's not impossible that the bug is actually in the debug mode macros,
> but I'd rather not ship code that's instable in debug mode -- that
> defeats the purpose.

I *suspect* the difference wrt debug mode is right where it's blowing up:

static void
gc_list_remove(PyGC_Head *node)
{
	node->gc_prev->gc_next = node->gc_next;
	node->gc_next->gc_prev = node->gc_prev;
#ifdef Py_DEBUG
	node->gc_prev = NULL;
	node->gc_next = NULL;
#endif
}

That is, in debug mode, the prev and next fields are nulled out, but not in
release mode.

Whenever this thing dies, the node passed in has prev and next fields that
*are* nulled out.  Since under MS debug mode, freed memory is set to a very
distinctive non-null bit pattern, this tells me that-- most likely --some
single node is getting passed to gc_list_remove *twice*.

I bet that's happening in release mode too ... hang on a second ... yup!  If
I remove the #ifdef above, then the pair test_weakref test_xmllib dies with a
null-pointer error here under the release build too.
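In Python terms, the debug-mode nulling amounts to this toy model (a
stand-in for the C code, not CPython itself): once the links are
cleared on removal, a second remove of the same node fails loudly
instead of silently corrupting the list.

```python
class Node:
    def __init__(self):
        self.gc_prev = self.gc_next = None

def gc_list_init(head):
    head.gc_prev = head.gc_next = head

def gc_list_append(node, head):
    node.gc_next = head
    node.gc_prev = head.gc_prev
    node.gc_prev.gc_next = node
    head.gc_prev = node

def gc_list_remove(node):
    node.gc_prev.gc_next = node.gc_next
    node.gc_next.gc_prev = node.gc_prev
    # the debug-mode nulling: makes a double remove blow up here
    node.gc_prev = node.gc_next = None

head, node = Node(), Node()
gc_list_init(head)
gc_list_append(node, head)
gc_list_remove(node)           # fine
try:
    gc_list_remove(node)       # the suspected bug: removed twice
except AttributeError:
    print("double remove caught")
```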

and-that-ain't-good-ly y'rs  - tim



From tim.one@home.com  Fri Mar 23 05:56:05 2001
From: tim.one@home.com (Tim Peters)
Date: Fri, 23 Mar 2001 00:56:05 -0500
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELNJHAA.tim.one@home.com>

More info on the debug-mode

    test_weakref test_xmllib

blowup in gc_list_append, and with the .pyc files already there.

While running test_weakref, we call collect() once.

Ditto while running test_xmllib:  that's when it blows up.

collect_generations() is here (***):

	else {
		generation = 0;
		collections0++;
		if (generation0.gc_next != &generation0) {
***			n = collect(&generation0, &generation1);
		}
	}

collect() is here:

	gc_list_init(&reachable);
	move_roots(young, &reachable);
***	move_root_reachable(&reachable);

move_root_reachable is here:

***		(void) traverse(op,
			       (visitproc)visit_reachable,
			       (void *)reachable);

And that's really calling dict_traverse, which is iterating over the dict.

At blowup time, the dict key is of PyString_Type, with value "ref3", and so
presumably left over from test_weakref.  The dict value is of
PyWeakProxy_Type, has a refcount of 2, and has

    wr_object   pointing to Py_NoneStruct
    wr_callback NULL
    hash        0xffffffff
    wr_prev     NULL
    wr_next     NULL

It's dying while calling visit() (really visit_reachable) on the latter.

Inside visit_reachable, we have:

		if (gc && gc->gc_refs != GC_MOVED) {

and that's interesting too, because gc->gc_refs is 0xcdcdcdcd, which is the
MS debug-mode "clean landfill" value:  freshly malloc'ed memory is filled
with 0xcd bytes (so gc->gc_refs is uninitialized trash).

My conclusion:  it's really hosed.  Take it away, Neil <wink>!



From tim.one@home.com  Fri Mar 23 06:19:19 2001
From: tim.one@home.com (Tim Peters)
Date: Fri, 23 Mar 2001 01:19:19 -0500
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELOJHAA.tim.one@home.com>

> So I propose to back out that change and be done with it.

I just did revert the change (rev 1.8 of _weakref.c, back to 1.7), so anyone
interested in pursuing the details should NOT update.

There's another reason for not updating then:  the problem "went away" after
the next big pile of checkins, even before I reverted the change.  I assume
that's simply because things got jiggled enough so that we no longer hit
exactly the right sequence of internal operations.



From fdrake@acm.org  Fri Mar 23 06:50:21 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 01:50:21 -0500 (EST)
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
 <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
Message-ID: <15034.61997.299305.456415@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > That is, in debug mode, the prev and next fields are nulled out, but not in
 > release mode.
 > 
 > Whenever this thing dies, the node passed in has prev and next fields that
 > *are* nulled out.  Since under MS debug mode, freed memory is set to a very
 > distinctive non-null bit pattern, this tells me that-- most likely --some
 > single node is getting passed to gc_list_remove *twice*.
 > 
 > I bet that's happening in release mode too ... hang on a second ... yup!  If
 > I remove the #ifdef above, then the pair test_weakref test_xmllib dies with a
 > null-pointer error here under the release build too.

  Ok, I've been trying to keep up with all this, and playing with some
alternate patches.  The change that's been identified as causing the
problem was trying to remove the weak ref from the cycle detector's set
of known containers as soon as the ref object was no longer a
container.  Doing this from within the tp_clear handler may be the
problem: the GC machinery itself removes the object from the list and
calls gc_list_remove() assuming the object is still there, but by then
the tp_clear handler has already taken it out.
  I see a couple of options:

  - Document the restriction that PyObject_GC_Fini() should not be
    called on an object while its tp_clear handler is active (more
    efficient), -or-
  - Remove the restriction (safer).

  If we take the former route, I think it is still worth removing the
weakref object from the GC list as soon as it has been cleared, in
order to keep the number of containers the GC machinery has to inspect
at a minimum.  This can be done by adding a flag to
weakref.c:clear_weakref() indicating that the object's tp_clear is
active.  The extra flag would not be needed if we took the second
option.
  Another possibility, if I do adjust the code to remove the weakref
objects from the GC list aggressively, is to only call
PyObject_GC_Init() if the weakref actually has a callback -- if there
is no callback, the weakref object does not act as a container to
begin with.
  (It is also possible that with aggressive removal of the weakref
object from the set of containers, it doesn't need to implement the
tp_clear handler at all, in which case this gets just a little bit
nicer.)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From nas@arctrix.com  Fri Mar 23 13:41:02 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 05:41:02 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKELOJHAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 23, 2001 at 01:19:19AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCKELOJHAA.tim.one@home.com>
Message-ID: <20010323054102.A28875@glacier.fnational.com>

On Fri, Mar 23, 2001 at 01:19:19AM -0500, Tim Peters wrote:
> There's another reason for not updating then:  the problem "went away" after
> the next big pile of checkins, even before I reverted the change.  I assume
> that's simply because things got jiggled enough so that we no longer hit
> exactly the right sequence of internal operations.

Yes.

  Neil


From nas@arctrix.com  Fri Mar 23 13:47:40 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 05:47:40 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 23, 2001 at 12:10:33AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
Message-ID: <20010323054740.B28875@glacier.fnational.com>

On Fri, Mar 23, 2001 at 12:10:33AM -0500, Tim Peters wrote:
> I *suspect* the difference wrt debug mode is right where it's blowing up:
> 
> static void
> gc_list_remove(PyGC_Head *node)
> {
> 	node->gc_prev->gc_next = node->gc_next;
> 	node->gc_next->gc_prev = node->gc_prev;
> #ifdef Py_DEBUG
> 	node->gc_prev = NULL;
> 	node->gc_next = NULL;
> #endif
> }

PyObject_GC_Fini() should not be called twice on the same object
unless there is a PyObject_GC_Init() in between.  I suspect that
Fred's change made this happen.  When Py_DEBUG is not defined the
GC will do all sorts of strange things if you do this, hence the
debugging code.
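
The failure mode Neil describes can be illustrated with a toy Python
sketch (hypothetical, not CPython source): gc_list_remove()-style
unlinking applied twice to the same node silently leaves the list's
links inconsistent, while the Py_DEBUG-style nulling turns the second
removal into a loud error instead.

```python
class Node:
    def __init__(self):
        # A freshly made node (or list head) points at itself.
        self.prev = self.next = self

def list_append(node, head):
    node.next = head
    node.prev = head.prev
    head.prev.next = node
    head.prev = node

def list_remove(node, debug=False):
    node.prev.next = node.next
    node.next.prev = node.prev
    if debug:
        # Mirrors the Py_DEBUG branch of gc_list_remove().
        node.prev = node.next = None

head, a, b = Node(), Node(), Node()
list_append(a, head)
list_append(b, head)

list_remove(a)       # list is now head <-> b
list_remove(b)       # list is now empty
list_remove(a)       # double removal via a's stale pointers...
# ...the links are now inconsistent: forward traversal finds b,
# backward traversal does not.
assert head.next is b and head.prev is head

caught = False
head2, c = Node(), Node()
list_append(c, head2)
list_remove(c, debug=True)
try:
    list_remove(c, debug=True)   # nulled pointers fail loudly
except AttributeError:
    caught = True
assert caught
```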

  Neil


From nas@arctrix.com  Fri Mar 23 14:08:24 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 06:08:24 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <15034.61997.299305.456415@cj42289-a.reston1.va.home.com>; from fdrake@acm.org on Fri, Mar 23, 2001 at 01:50:21AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com> <15034.61997.299305.456415@cj42289-a.reston1.va.home.com>
Message-ID: <20010323060824.C28875@glacier.fnational.com>

On Fri, Mar 23, 2001 at 01:50:21AM -0500, Fred L. Drake, Jr. wrote:
> The change that's been identified as causing the problem was
> trying to remove the weak ref from the cycle detectors set of
> known containers as soon as the ref object was no longer a
> container.

I'm not sure what you mean by "no longer a container".  If the
object defines the GC type flag the GC thinks it's a container.

> When this is done by the tp_clear handler may be the problem;
> the GC machinery is removing the object from the list, and
> calls gc_list_remove() assuming that the object is still in the
> list, but after the tp_clear handler has been called.

I believe your problems are deeper than this.  If
PyObject_IS_GC(op) is true and op is reachable from other objects
known to the GC then op must be in the linked list.  I haven't
tracked down all the locations in gcmodule where this assumption
is made but visit_reachable is one example.

We could remove this restriction if we were willing to accept
some slowdown.  One way would be to add the invariant
(gc_next == NULL) if the object is not in the GC list.  PyObject_Init
and gc_list_remove would have to set this pointer.  Is it worth
doing?
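
The invariant Neil proposes can be sketched in Python (a hypothetical
illustration, not CPython code): keeping the next pointer NULL (None
here) whenever an object is off the list makes membership testable, so
a second removal can be detected instead of corrupting the list.

```python
class Node:
    def __init__(self):
        self.prev = self.next = None   # None plays the role of gc_next == NULL

class Head(Node):
    def __init__(self):
        super().__init__()
        self.prev = self.next = self   # a list head is always "in" its own list

def gc_list_append(node, head):
    assert node.next is None, "node is already in a list"
    node.next = head
    node.prev = head.prev
    head.prev.next = node
    head.prev = node

def gc_list_remove(node):
    if node.next is None:              # the invariant makes this test possible
        return False                   # not in any list: removal is a no-op
    node.prev.next = node.next
    node.next.prev = node.prev
    node.prev = node.next = None       # restore the invariant
    return True

head, n = Head(), Node()
gc_list_append(n, head)
assert gc_list_remove(n) is True
assert gc_list_remove(n) is False      # the second removal is now harmless
assert head.next is head and head.prev is head
```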

  Neil


From gward@python.net  Fri Mar 23 15:04:07 2001
From: gward@python.net (Greg Ward)
Date: Fri, 23 Mar 2001 10:04:07 -0500
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <15034.33486.157946.686067@mace.lucasdigital.com>; from tommy@ilm.com on Thu, Mar 22, 2001 at 03:03:29PM -0800
References: <15034.33486.157946.686067@mace.lucasdigital.com>
Message-ID: <20010323100407.A8367@cthulhu.gerg.ca>

On 22 March 2001, Flying Cougar Burnette said:
> Can anyone think of a good reason why we don't import rlcompleter
> instead of readline by default?  I can give you a good reason why it
> *should*, but I'd rather not bore anyone with the details if I don't
> have to.

Haven't tried your patch, but when you "import rlcompleter" manually in
an interactive session, that's not enough.  You also have to call

  readline.parse_and_bind("tab: complete")

*Then* <tab> does the right thing (ie. completion in the interpreter's
global namespace).  I like it, but I'll bet Guido won't because you can
always do this:

  $ cat > ~/.pythonrc
  import readline, rlcompleter
  readline.parse_and_bind("tab: complete")

and put "export PYTHONSTARTUP=~/.pythonrc" in your ~/.profile (or
whatever) to achieve the same effect.

But I think having this convenience built-in for free would be a very
nice thing.  I used Python for over a year before I found out about
PYTHONSTARTUP, and it was another year after that that I learned about
readline.parse_and_bind().  Why not save future newbies the bother?

        Greg
-- 
Greg Ward - Linux nerd                                  gward@python.net
http://starship.python.net/~gward/
Animals can be driven crazy by placing too many in too small a pen. 
Homo sapiens is the only animal that voluntarily does this to himself.


From fdrake@acm.org  Fri Mar 23 15:22:37 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 10:22:37 -0500 (EST)
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <20010323100407.A8367@cthulhu.gerg.ca>
References: <15034.33486.157946.686067@mace.lucasdigital.com>
 <20010323100407.A8367@cthulhu.gerg.ca>
Message-ID: <15035.27197.714696.640238@localhost.localdomain>

Greg Ward writes:
 > But I think having this convenience built-in for free would be a very
 > nice thing.  I used Python for over a year before I found out about
 > PYTHONSTARTUP, and it was another year after that that I learned about
 > readline.parse_and_bind().  Why not save future newbies the bother?

  Maybe.  Or perhaps you should have looked at the tutorial?  ;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From jeremy@alum.mit.edu  Fri Mar 23 15:31:56 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Fri, 23 Mar 2001 10:31:56 -0500 (EST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net>

Are there any more checkins coming?

In general -- are there any checkins other than documentation and a
fix for the GC/debug/weakref problem?

Jeremy


From fdrake@acm.org  Fri Mar 23 15:35:24 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 10:35:24 -0500 (EST)
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <20010323060824.C28875@glacier.fnational.com>
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
 <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
 <15034.61997.299305.456415@cj42289-a.reston1.va.home.com>
 <20010323060824.C28875@glacier.fnational.com>
Message-ID: <15035.27964.645249.362484@localhost.localdomain>

Neil Schemenauer writes:
 > I'm not sure what you mean by "no longer a container".  If the
 > object defines the GC type flag the GC thinks its a container.

  Given the assumptions you describe, removing the object from the
list isn't sufficient to not be a container.  ;-(  In which case
reverting the change (as Tim did) is probably the only way to do it.
  What I was looking for was a way to remove the weakref object from
the set of containers sooner, but apparently that isn't possible as
long as the object's type is the only thing used to determine whether
it is a container.

 > I believe your problems are deeper than this.  If
 > PyObject_IS_GC(op) is true and op is reachable from other objects

  And this only considers the object's type; the object can't be
removed from the set of containers by calling PyObject_GC_Fini().  (It
clearly can't while tp_clear is active for that object!)

 > known to the GC then op must be in the linked list.  I haven't
 > tracked down all the locations in gcmodule where this assumption
 > is made but visit_reachable is one example.

  So it's illegal to call PyObject_GC_Fini() anywhere but from the
destructor?  Please let me know so I can make this clear in the
documentation!

 > We could remove this restriction if we were willing to accept
 > some slowdown.  One way would be to add the invariant
 > (gc_next == NULL) if the object is not in the GC list.  PyObject_Init
 > and gc_list_remove would have to set this pointer.  Is it worth
 > doing?

  It's not at all clear that we need to remove the restriction --
documenting it would be required.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From ping@lfw.org  Fri Mar 23 15:44:54 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Fri, 23 Mar 2001 07:44:54 -0800 (PST)
Subject: [Python-Dev] Re: any more checkins
In-Reply-To: <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org>

On Fri, 23 Mar 2001, Jeremy Hylton wrote:
> Are there any more checkins coming?

There are still issues in pydoc to be solved, but i think they can
be reasonably considered bugfixes rather than new features.  The
two main messy ones are getting reloading right (i am really hurting
for lack of a working find_module here!) and handling more strange
aliasing cases (HTMLgen, for example, provides many classes under
multiple names).  I hope it will be okay for me to work on these two
main fixes in the coming week.


-- ?!ng



From guido@digicool.com  Fri Mar 23 15:45:04 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 10:45:04 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: Your message of "Fri, 23 Mar 2001 10:31:56 EST."
 <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net>
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>
 <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103231545.KAA10940@cj20424-a.reston1.va.home.com>

> Are there any more checkins coming?
> 
> In general -- are there any checkins other than documentation and a
> fix for the GC/debug/weakref problem?

I think one more from Ping, for a detail in sys.excepthook.

The GC issue is dealt with as far as I'm concerned -- any changes that
Neil suggests are too speculative to attempt this late in the game,
and Fred's patch has already been backed out by Tim.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Fri Mar 23 15:49:13 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 10:49:13 -0500
Subject: [Python-Dev] Re: any more checkins
In-Reply-To: Your message of "Fri, 23 Mar 2001 07:44:54 PST."
 <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org>
References: <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org>
Message-ID: <200103231549.KAA10977@cj20424-a.reston1.va.home.com>

> There are still issues in pydoc to be solved, but i think they can
> be reasonably considered bugfixes rather than new features.  The
> two main messy ones are getting reloading right (i am really hurting
> for lack of a working find_module here!) and handling more strange
> aliasing cases (HTMLgen, for example, provides many classes under
> multiple names).  I hope it will be okay for me to work on these two
> main fixes in the coming week.

This is fine after the b2 release.  I consider pydoc a "1.0" release
anyway, so it's okay if its development speed is different than that
of the rest of Python!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From nas@arctrix.com  Fri Mar 23 15:53:15 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 07:53:15 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <15035.27964.645249.362484@localhost.localdomain>; from fdrake@acm.org on Fri, Mar 23, 2001 at 10:35:24AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com> <15034.61997.299305.456415@cj42289-a.reston1.va.home.com> <20010323060824.C28875@glacier.fnational.com> <15035.27964.645249.362484@localhost.localdomain>
Message-ID: <20010323075315.A29414@glacier.fnational.com>

On Fri, Mar 23, 2001 at 10:35:24AM -0500, Fred L. Drake, Jr. wrote:
>   So it's illegal to call PyObject_GC_Fini() anywhere but from the
> destructor?  Please let me know so I can make this clear in the
> documentation!

No, it's okay as long as the object is not reachable from other
objects.  When tuples are added to the tuple free-list
PyObject_GC_Fini() is called.  When they are removed
PyObject_GC_Init() is called.  This is okay because free tuples
aren't reachable from anywhere else.

> It's not at all clear that we need to remove the restriction --
> documenting it would be required.

Yah, sorry about that.  I had forgotten about that restriction.
When I saw Tim's message things started to come back to me.  I
had to study the code a bit to remember how things worked.

  Neil


From aahz@panix.com  Fri Mar 23 15:46:54 2001
From: aahz@panix.com (aahz@panix.com)
Date: Fri, 23 Mar 2001 10:46:54 -0500 (EST)
Subject: [Python-Dev] Re: Python T-shirts
References: <mailman.985019605.8781.python-list@python.org>
Message-ID: <200103231546.KAA29483@panix6.panix.com>

[posted to c.l.py with cc to python-dev]

In article <mailman.985019605.8781.python-list@python.org>,
Guido van Rossum  <guido@digicool.com> wrote:
>
>At the conference we handed out T-shirts with the slogan on the back
>"Python: programming the way Guido indented it".  We've been asked if
>there are any left.  Well, we gave them all away, but we're ordering
>more.  You can get them for $10 + S+H.  Write to Melissa Light
><melissa@digicool.com>.  Be nice to her!

If you're in the USA, S&H is $3.50, for a total cost of $13.50.  Also,
at the conference, all t-shirts were size L, but Melissa says that
she'll take size requests (since they haven't actually ordered the
t-shirts yet).
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"I won't accept a model of the universe in which free will, omniscient
gods, and atheism are simultaneously true."  -- M


From nas@arctrix.com  Fri Mar 23 15:55:15 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 07:55:15 -0800
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: <200103231545.KAA10940@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 23, 2001 at 10:45:04AM -0500
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net> <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net> <200103231545.KAA10940@cj20424-a.reston1.va.home.com>
Message-ID: <20010323075515.B29414@glacier.fnational.com>

On Fri, Mar 23, 2001 at 10:45:04AM -0500, Guido van Rossum wrote:
> The GC issue is dealt with as far as I'm concerned -- any changes that
> Neil suggests are too speculative to attempt this late in the game,
> and Fred's patch has already been backed out by Tim.

I agree.

  Neil


From ping@lfw.org  Fri Mar 23 15:56:56 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Fri, 23 Mar 2001 07:56:56 -0800 (PST)
Subject: [Python-Dev] Re: any more checkins
In-Reply-To: <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org>
Message-ID: <Pine.LNX.4.10.10103230750340.4368-100000@skuld.kingmanhall.org>

On Fri, 23 Mar 2001, Ka-Ping Yee wrote:
> two main messy ones are getting reloading right (i am really hurting
> for lack of a working find_module here!)

I made an attempt at this last night but didn't finish, so reloading
isn't correct at the moment for submodules in packages.  It appears
that i'm going to have to build a few pieces of infrastructure to make
it work well: a find_module that understands packages, a sure-fire
way of distinguishing the different kinds of ImportError, and a
reliable reloader in the end.  The particular issue of incompletely-
imported modules is especially thorny, and i don't know if there's
going to be any good solution for that.

Oh, and it would be nice for the "help" object to be a little more
informative, but that could just be considered documentation; and
a test_pydoc suite would be good.


-- ?!ng



From fdrake@acm.org  Fri Mar 23 15:55:10 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 10:55:10 -0500 (EST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: <200103231545.KAA10940@cj20424-a.reston1.va.home.com>
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>
 <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net>
 <200103231545.KAA10940@cj20424-a.reston1.va.home.com>
Message-ID: <15035.29150.755915.883372@localhost.localdomain>

Guido van Rossum writes:
 > The GC issue is dealt with as far as I'm concerned -- any changes that
 > Neil suggests are too speculative to attempt this late in the game,
 > and Fred's patch has already been backed out by Tim.

  Agreed -- I don't think we need to change this further for 2.1.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From thomas@xs4all.net  Fri Mar 23 16:31:38 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 23 Mar 2001 17:31:38 +0100
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <20010323100407.A8367@cthulhu.gerg.ca>; from gward@python.net on Fri, Mar 23, 2001 at 10:04:07AM -0500
References: <15034.33486.157946.686067@mace.lucasdigital.com> <20010323100407.A8367@cthulhu.gerg.ca>
Message-ID: <20010323173138.E13066@xs4all.nl>

On Fri, Mar 23, 2001 at 10:04:07AM -0500, Greg Ward wrote:

> But I think having this convenience built-in for free would be a very
> nice thing.  I used Python for over a year before I found out about
> PYTHONSTARTUP, and it was another year after that that I learned about
> readline.parse_and_bind().  Why not save future newbies the bother?

And break all those poor users who use tab in interactive mode (like *me*)
to mean tab, not 'complete me please' ? No, please don't do that :)

-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From fdrake@acm.org  Fri Mar 23 17:43:55 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 12:43:55 -0500 (EST)
Subject: [Python-Dev] Doc/ tree frozen for 2.1b2 release
Message-ID: <15035.35675.217841.967860@localhost.localdomain>

  I'm freezing the doc tree until after the 2.1b2 release is made.
Please do not make any further checkins there.
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From moshez@zadka.site.co.il  Fri Mar 23 19:08:22 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Fri, 23 Mar 2001 21:08:22 +0200
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
Message-ID: <E14gWv8-0001OB-00@darjeeling>

Now that we have rich comparisons, I've suddenly realized they are
not rich enough. Consider a set type.

>>> a = set([1,2])
>>> b = set([1,3])
>>> a>b
0
>>> a<b
0
>>> max(a,b) == a
1

While I'd like

>>> max(a,b) == set([1,2,3])
>>> min(a,b) == set([1])

In current Python, there's no way to do it.
I'm still thinking about this. If it bothers anyone else, I'd
be happy to know about it.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From fdrake@localhost.localdomain  Fri Mar 23 19:11:52 2001
From: fdrake@localhost.localdomain (Fred Drake)
Date: Fri, 23 Mar 2001 14:11:52 -0500 (EST)
Subject: [Python-Dev] [development doc updates]
Message-ID: <20010323191152.3019628995@localhost.localdomain>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/


Documentation for the second beta release of Python 2.1.

This includes information on future statements and lexical scoping,
and weak references.  Much of the module documentation has been
improved as well.



From guido@digicool.com  Fri Mar 23 19:20:21 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 14:20:21 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: Your message of "Fri, 23 Mar 2001 21:08:22 +0200."
 <E14gWv8-0001OB-00@darjeeling>
References: <E14gWv8-0001OB-00@darjeeling>
Message-ID: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>

> Now that we have rich comparisons, I've suddenly realized they are
> not rich enough. Consider a set type.
> 
> >>> a = set([1,2])
> >>> b = set([1,3])
> >>> a>b
> 0
> >>> a<b
> 0

I'd expect both of these to raise an exception.

> >>> max(a,b) == a
> 1
> 
> While I'd like
> 
> >>> max(a,b) == set([1,2,3])
> >>> min(a,b) == set([1])

You shouldn't call that max() or min().  These functions are supposed
to return one of their arguments (or an item from their argument
collection), not a composite.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From ping@lfw.org  Fri Mar 23 19:35:43 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Fri, 23 Mar 2001 11:35:43 -0800 (PST)
Subject: [Python-Dev] Re: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <E14gWv8-0001OB-00@darjeeling>
Message-ID: <Pine.LNX.4.10.10103231134360.4368-100000@skuld.kingmanhall.org>

On Fri, 23 Mar 2001, Moshe Zadka wrote:
> >>> a = set([1,2])
> >>> b = set([1,3])
[...]
> While I'd like
> 
> >>> max(a,b) == set([1,2,3])
> >>> min(a,b) == set([1])

The operation you're talking about isn't really max or min.

Why not simply write:

    >>> a | b
    [1, 2, 3]
    >>> a & b
    [1]

?
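
(The set type under discussion was hypothetical in 2001, but today's
built-in set behaves just as Ping suggests, spelling union and
intersection with the rich operators:)

```python
a = {1, 2}
b = {1, 3}
assert a | b == {1, 2, 3}   # union: the composite Moshe wanted from max()
assert a & b == {1}         # intersection: what he wanted from min()
```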


-- ?!ng



From fdrake@acm.org  Fri Mar 23 20:38:55 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 15:38:55 -0500 (EST)
Subject: [Python-Dev] Anyone using weakrefs?
Message-ID: <15035.46175.599654.851399@localhost.localdomain>

  Is anyone out there playing with the weak references support yet?
I'd *really* appreciate receiving a short snippet of non-contrived
code that makes use of weak references to use in the documentation.
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From tommy@ilm.com  Fri Mar 23 21:12:49 2001
From: tommy@ilm.com (Flying Cougar Burnette)
Date: Fri, 23 Mar 2001 13:12:49 -0800 (PST)
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <20010323173138.E13066@xs4all.nl>
References: <15034.33486.157946.686067@mace.lucasdigital.com>
 <20010323100407.A8367@cthulhu.gerg.ca>
 <20010323173138.E13066@xs4all.nl>
Message-ID: <15035.48030.112179.717830@mace.lucasdigital.com>

But if we just change the readline import to rlcompleter and *don't*
do the parse_and_bind trick then your TABs will not be impacted,
correct?  Will we lose anything by making this switch?



Thomas Wouters writes:
| On Fri, Mar 23, 2001 at 10:04:07AM -0500, Greg Ward wrote:
| 
| > But I think having this convenience built-in for free would be a very
| > nice thing.  I used Python for over a year before I found out about
| > PYTHONSTARTUP, and it was another year after that that I learned about
| > readline.parse_and_bind().  Why not save future newbies the bother?
| 
| And break all those poor users who use tab in interactive mode (like *me*)
| to mean tab, not 'complete me please' ? No, please don't do that :)
| 
| -- 
| Thomas Wouters <thomas@xs4all.net>
| 
| Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
| 
| _______________________________________________
| Python-Dev mailing list
| Python-Dev@python.org
| http://mail.python.org/mailman/listinfo/python-dev


From moshez@zadka.site.co.il  Fri Mar 23 20:30:12 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Fri, 23 Mar 2001 22:30:12 +0200
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>
References: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>, <E14gWv8-0001OB-00@darjeeling>
Message-ID: <E14gYCK-0001VT-00@darjeeling>

On Fri, 23 Mar 2001 14:20:21 -0500, Guido van Rossum <guido@digicool.com> wrote:

> > >>> a = set([1,2])
> > >>> b = set([1,3])
> > >>> a>b
> > 0
> > >>> a<b
> > 0
> 
> I'd expect both of these to raise an exception.
 
I wouldn't. a>b means "does a contain b". It doesn't.
There *is* a partial order on sets: partial means a<b, a>b, a==b can all
be false, yet each of them still has a meaning.

FWIW, I'd be for a partial order on complex numbers too 
(a<b iff a.real<b.real and a.imag<b.imag)
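
Moshe's proposed order can be sketched with rich comparisons (a
hypothetical illustration; Python's own complex type refuses ordering
comparisons outright):

```python
class PartialComplex(complex):
    """Complex numbers under the proposed partial order."""
    def __lt__(self, other):
        return self.real < other.real and self.imag < other.imag
    def __gt__(self, other):
        return self.real > other.real and self.imag > other.imag

p = PartialComplex(1, 5)
q = PartialComplex(2, 3)
# An incomparable pair: neither <, nor >, nor == holds.
assert not p < q and not p > q and not p == q
# But the order is still meaningful where it applies.
assert PartialComplex(0, 0) < PartialComplex(1, 1)
```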

> > >>> max(a,b) == a
> > 1
> > 
> > While I'd like
> > 
> > >>> max(a,b) == set([1,2,3])
> > >>> min(a,b) == set([1])
> 
> You shouldn't call that max() or min().

I didn't. Mathematicians do.
The mathematical definition for max() I learned in Calculus 101 was
"the smallest element which is > then all arguments" (hence, properly speaking,
max should also specify the set in which it takes place. Doesn't seem to
matter in real life)

>  These functions are supposed
> to return one of their arguments

Why? 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From guido@digicool.com  Fri Mar 23 21:41:14 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 16:41:14 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: Your message of "Fri, 23 Mar 2001 22:30:12 +0200."
 <E14gYCK-0001VT-00@darjeeling>
References: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>, <E14gWv8-0001OB-00@darjeeling>
 <E14gYCK-0001VT-00@darjeeling>
Message-ID: <200103232141.QAA14771@cj20424-a.reston1.va.home.com>

> > > >>> a = set([1,2])
> > > >>> b = set([1,3])
> > > >>> a>b
> > > 0
> > > >>> a<b
> > > 0
> > 
> > I'd expect both of these to raise an exception.
>  
> I wouldn't. a>b means "does a contain b". It doesn't.
> There *is* a partial order on sets: partial means a<b, a>b, a==b can all
> be false, but that there is a meaning for all of them.

Agreed, you can define < and > any way you want on your sets.  (Why
not <= and >=?  Doesn't a<b suggest that b has at least one element not
in a?)

> FWIW, I'd be for a partial order on complex numbers too 
> (a<b iff a.real<b.real and a.imag<b.imag)

Where is that useful?  Are there mathematicians who define it this way?

> > > >>> max(a,b) == a
> > > 1
> > > 
> > > While I'd like
> > > 
> > > >>> max(a,b) == set([1,2,3])
> > > >>> min(a,b) == set([1])
> > 
> > You shouldn't call that max() or min().
> 
> I didn't. Mathematicians do.
> The mathematical definition for max() I learned in Calculus 101 was
> "the smallest element which is > then all arguments" (hence, properly speaking,
> max should also specify the set in which it takes place. Doesn't seem to
> matter in real life)

Sorry, mathematicians can overload stuff that you can't in Python.
Write your own operator, function or method to calculate this, just
don't call it max.  And as someone else remarked, a|b and a&b might
already fit this bill.

> >  These functions are supposed
> > to return one of their arguments
> 
> Why?

From the docs for max:

"""
With a single argument \var{s}, return the largest item of a
non-empty sequence (e.g., a string, tuple or list).  With more than
one argument, return the largest of the arguments.
"""

It's quite clear to me from this that it returns always one of the
elements of a collection.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From tim.one@home.com  Fri Mar 23 21:47:41 2001
From: tim.one@home.com (Tim Peters)
Date: Fri, 23 Mar 2001 16:47:41 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <E14gYCK-0001VT-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>

[Moshe]
> The mathematical definition for max() I learned in Calculus 101 was
> "the smallest element which is > then all arguments"

Then I guess American and Dutch calculus are different.  Assuming you meant
to type >=, that's the definition of what we called the "least upper bound"
(or "lub") or "supremum" (or "sup"); and what I suppose you call "min" we
called "greatest lower bound" (or "glb") or "infimum".  I've never before
heard max or min used for these.  In lattices, a glb operator is often called
"meet" and a lub operator "join", but again I don't think I've ever seen them
called max or min.

[Guido]
>>  These functions are supposed to return one of their arguments

[Moshe]
> Why?

Because Guido said so <wink>.  Besides, it's apparently the only meaning he
ever heard of; me too.



From esr@thyrsus.com  Fri Mar 23 22:08:52 2001
From: esr@thyrsus.com (Eric S. Raymond)
Date: Fri, 23 Mar 2001 17:08:52 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 23, 2001 at 04:47:41PM -0500
References: <E14gYCK-0001VT-00@darjeeling> <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>
Message-ID: <20010323170851.A2802@thyrsus.com>

Tim Peters <tim.one@home.com>:
> [Moshe]
> > The mathematical definition for max() I learned in Calculus 101 was
> > "the smallest element which is > then all arguments"
> 
> Then I guess American and Dutch calculus are different.  Assuming you meant
> to type >=, that's the definition of what we called the "least upper bound"
> (or "lub") or "supremum" (or "sup"); and what I suppose you call "min" we
> called "greatest lower bound" (or "glb") or "infimum".  I've never before
> heard max or min used for these.  In lattices, a glb operator is often called
> "meet" and a lub operator "join", but again I don't think I've ever seen them
> called max or min.

Eric, speaking as a defrocked mathematician who was at one time rather
intimate with lattice theory, concurs.  However, Tim, I suspect you
will shortly discover that Moshe ain't Dutch.  I didn't ask and I
could be wrong, but at PC9 Moshe's accent and body language fairly
shouted "Israeli" at me.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

"[President Clinton] boasts about 186,000 people denied firearms under
the Brady Law rules.  The Brady Law has been in force for three years.  In
that time, they have prosecuted seven people and put three of them in
prison.  You know, the President has entertained more felons than that at
fundraising coffees in the White House, for Pete's sake."
	-- Charlton Heston, FOX News Sunday, 18 May 1997


From tim.one@home.com  Fri Mar 23 22:11:50 2001
From: tim.one@home.com (Tim Peters)
Date: Fri, 23 Mar 2001 17:11:50 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <20010323170851.A2802@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEPEJHAA.tim.one@home.com>

> Eric, speaking as a defrocked mathematician who was at one time rather
> intimate with lattice theory, concurs.  However, Tim, I suspect you
> will shortly discover that Moshe ain't Dutch.  I didn't ask and I
> could be wrong, but at PC9 Moshe's accent and body language fairly
> shouted "Israeli" at me.

Well, applying Moshe's theory of max to my message, you should have realized
that Israeli = max{American, Dutch}.  That is

    Then I guess American and Dutch calculus are different.

was missing

    (from Israeli calculus)

As you'll shortly discover from his temper when his perfidious schemes are
frustrated, Guido is the Dutch guy in this debate <wink>.

although-i-prefer-to-be-thought-of-as-plutonian-ly y'rs  - tim



From guido@digicool.com  Fri Mar 23 22:29:02 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 17:29:02 -0500
Subject: [Python-Dev] Python 2.1b2 released
Message-ID: <200103232229.RAA19777@cj20424-a.reston1.va.home.com>

On time, and with a minimum of fuss, we've released Python 2.1b2.
Thanks again to the many developers who contributed!

Check it out on the Python website:

    http://www.python.org/2.1/

or on SourceForge:

    http://sourceforge.net/project/showfiles.php?group_id=5470&release_id=28334

As befits a second beta release, there's no really big news since
2.1b1 was released on March 2:

- Bugs fixed and documentation added. There's now an appendix of the
  Reference Manual documenting nested scopes:

    http://python.sourceforge.net/devel-docs/ref/futures.html

- When nested scopes are enabled by "from __future__ import
  nested_scopes", this also applies to exec, eval() and execfile(),
  and to the interactive interpreter (when using -i).

- Assignment to the internal global variable __debug__ is now illegal.

- unittest.py, a unit testing framework by Steve Purcell (PyUNIT,
  inspired by JUnit), is now part of the standard library.  See the
  PyUnit webpage for documentation:

    http://pyunit.sourceforge.net/

Andrew Kuchling has written (and is continuously updating) an
extensive overview: What's New in Python 2.1:

    http://www.amk.ca/python/2.1/

See also the Release notes posted on SourceForge:

    http://sourceforge.net/project/shownotes.php?release_id=28334

We are planning to make the final release of Python 2.1 on April 13;
we may release a release candidate a week earlier.

We're also planning a bugfix release for Python 2.0, dubbed 2.0.1; we
don't have a release schedule for this yet.  We could use a volunteer
to act as the bug release manager!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From paulp@ActiveState.com  Fri Mar 23 23:54:19 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 15:54:19 -0800
Subject: [Python-Dev] [Fwd: Python 2.1b2 released]
Message-ID: <3ABBE22B.DBAE4552@ActiveState.com>


-------- Original Message --------
Subject: Python 2.1b2 released
Date: Fri, 23 Mar 2001 17:29:02 -0500
From: Guido van Rossum <guido@digicool.com>
To: python-dev@python.org, Python mailing list
<python-list@python.org>,python-announce@python.org

On time, and with a minimum of fuss, we've released Python 2.1b2.
Thanks again to the many developers who contributed!

Check it out on the Python website:

    http://www.python.org/2.1/

or on SourceForge:

    http://sourceforge.net/project/showfiles.php?group_id=5470&release_id=28334

As befits a second beta release, there's no really big news since
2.1b1 was released on March 2:

- Bugs fixed and documentation added. There's now an appendix of the
  Reference Manual documenting nested scopes:

    http://python.sourceforge.net/devel-docs/ref/futures.html

- When nested scopes are enabled by "from __future__ import
  nested_scopes", this also applies to exec, eval() and execfile(),
  and to the interactive interpreter (when using -i).

- Assignment to the internal global variable __debug__ is now illegal.

- unittest.py, a unit testing framework by Steve Purcell (PyUNIT,
  inspired by JUnit), is now part of the standard library.  See the
  PyUnit webpage for documentation:

    http://pyunit.sourceforge.net/

Andrew Kuchling has written (and is continuously updating) an
extensive overview: What's New in Python 2.1:

    http://www.amk.ca/python/2.1/

See also the Release notes posted on SourceForge:

    http://sourceforge.net/project/shownotes.php?release_id=28334

We are planning to make the final release of Python 2.1 on April 13;
we may release a release candidate a week earlier.

We're also planning a bugfix release for Python 2.0, dubbed 2.0.1; we
don't have a release schedule for this yet.  We could use a volunteer
to act as the bug release manager!

--Guido van Rossum (home page: http://www.python.org/~guido/)

-- 
http://mail.python.org/mailman/listinfo/python-list


From paulp@ActiveState.com  Sat Mar 24 00:15:30 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 16:15:30 -0800
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich"
 Comparisons?
References: <E14gYCK-0001VT-00@darjeeling> <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com> <20010323170851.A2802@thyrsus.com>
Message-ID: <3ABBE722.B29684A1@ActiveState.com>

"Eric S. Raymond" wrote:
> 
>...
> 
> Eric, speaking as a defrocked mathematician who was at one time rather
> intimate with lattice theory, concurs.  However, Tim, I suspect you
> will shortly discover that Moshe ain't Dutch.  I didn't ask and I
> could be wrong, but at PC9 Moshe's accent and body language fairly
> shouted "Israeli" at me.

Not to mention his top-level domain. Sorry, I couldn't resist.

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook


From paulp@ActiveState.com  Sat Mar 24 00:21:10 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 16:21:10 -0800
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich"
 Comparisons?
References: <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>
Message-ID: <3ABBE876.8EC91425@ActiveState.com>

Tim Peters wrote:
> 
> [Moshe]
> > The mathematical definition for max() I learned in Calculus 101 was
> > "the smallest element which is > then all arguments"
> 
> Then I guess American and Dutch calculus are different.  Assuming you meant
> to type >=, that's the definition of what we called the "least upper bound"
> (or "lub") or "supremum" (or "sup"); and what I suppose you call "min" we
> called "greatest lower bound" (or "glb") or "infimum".  

As long as we're shooting the shit on a Friday afternoon...

http://www.emba.uvm.edu/~read/TI86/maxmin.html
http://www.math.com/tables/derivatives/extrema.htm

Look at that domain name. Are you going to argue with that??? A
corporation dedicated to mathematics?

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook


From paulp@ActiveState.com  Sat Mar 24 01:16:03 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 17:16:03 -0800
Subject: [Python-Dev] Making types behave like classes
Message-ID: <3ABBF553.274D535@ActiveState.com>

These are some half-baked ideas about getting classes and types to look
more similar. I would like to know whether they are workable or not and
so I present them to the people best equipped to tell me.

Many extension types have a __getattr__ that looks like this:

static PyObject *
Xxo_getattr(XxoObject *self, char *name)
{
	/* try to do some work with known attribute names, else: */

	return Py_FindMethod(Xxo_methods, (PyObject *)self, name);
}

Py_FindMethod can (despite its name) return any Python object, including
ordinary (non-function) attributes. It also has complete access to the
object's state and type through the self parameter. Here's what we do
today for __doc__:

		if (strcmp(name, "__doc__") == 0) {
			char *doc = self->ob_type->tp_doc;
			if (doc != NULL)
				return PyString_FromString(doc);
		}

Why can't we do this for all magic methods? 

	* __class__ would return the type object
	* __add__, __len__, __call__, ... would return a method wrapper
	  around the appropriate slot
	* __init__ might map to a no-op

I think that Py_FindMethod could even implement inheritance between
types if we wanted.

We already do this magic for __methods__ and __doc__. Why not for all of
the magic methods?

Many other types implement no getattr at all (the slot is NULL). In that
case, I think that we have carte blanche to define their getattr
behavior to be as instance-like as possible.

Finally there are the types with getattrs that do not dispatch to
Py_FindMethod. We can just change those over manually. Extension authors
will do the same when they realize that their types are not inheriting
the features that the other types are.

Benefits:

	* objects based on extension types would "look more like" classes to
Python programmers so there is less confusion about how they are
different

	* users could stop using the type() function to get concrete types and
instead use __class__. After a version or two, type() could be formally
deprecated in favor of isinstance and __class__.

	* we will have started some momentum towards type/class unification
which we could continue on into __setattr__ and subclassing.

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook


From jafo@tummy.com  Sat Mar 24 06:50:08 2001
From: jafo@tummy.com (Sean Reifschneider)
Date: Fri, 23 Mar 2001 23:50:08 -0700
Subject: [Python-Dev] Python 2.1b2 SRPM (was: Re: Python 2.1b2 released)
In-Reply-To: <200103232229.RAA19777@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 23, 2001 at 05:29:02PM -0500
References: <200103232229.RAA19777@cj20424-a.reston1.va.home.com>
Message-ID: <20010323235008.A30668@tummy.com>

Shy of RPMs because of library or other dependency problems with most of
the RPMs you pick up?  The cure, in my experience, is to pick up an SRPM.
All you need to do to build a binary package tailored to your system is run
"rpm --rebuild <packagename>.src.rpm".

I've just put up an SRPM of the 2.1b2 release at:

   ftp://ftp.tummy.com/pub/tummy/RPMS/SRPMS/

Again, this one builds the executable as "python2.1", and can be installed
alongside your normal Python on the system.  Want to check out a great new
feature?  Type "python2.1 /usr/bin/pydoc string".

Download the SRPM from above, and most users can install a binary built
against exactly the set of packages on their system by doing:

   rpm --rebuild python-2.1b2-1tummy.src.rpm
   rpm -i /usr/src/redhat/RPMS/i386/python*2.1b2-1tummy.i386.rpm

Note that this release enables "--with-pymalloc".  If you experience
problems with modules you use, please report the module and how it can be
reproduced so that these issues can be taken care of.

Enjoy,
Sean
-- 
 Total strangers need love, too; and I'm stranger than most.
Sean Reifschneider, Inimitably Superfluous <jafo@tummy.com>
tummy.com - Linux Consulting since 1995. Qmail, KRUD, Firewalls, Python


From moshez@zadka.site.co.il  Sat Mar 24 06:53:03 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 08:53:03 +0200
Subject: [Python-Dev] test_minidom crash
Message-ID: <E14ghv5-0003fu-00@darjeeling>

The bug is in Lib/xml/__init__.py

__version__ = "1.9".split()[1]

I don't know what it was supposed to be, but .split() without an
argument splits on whitespace.  Best guess is "1.9".split('.') ??
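A quick illustration of the failure mode (editorial aside, not part of the original report): the indexing only works when the string contains more than one whitespace-separated word, as an expanded CVS $Revision$ keyword does.

```python
# With the CVS keyword expanded, split() yields three words and [1] works:
expanded = "$Revision: 1.9 $"
assert expanded.split() == ["$Revision:", "1.9", "$"]
assert expanded.split()[1] == "1.9"

# With only the bare value, split() yields one word and [1] is out of range:
bare = "1.9"
assert bare.split() == ["1.9"]
try:
    bare.split()[1]
except IndexError:
    pass  # this IndexError is the "crash" seen at import time
```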

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From moshez@zadka.site.co.il  Sat Mar 24 07:30:47 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 09:30:47 +0200
Subject: [Python-Dev] Py2.1b2/bsddb build problems
Message-ID: <E14giVb-00051a-00@darjeeling>

setup.py needs the following lines:

        if self.compiler.find_library_file(lib_dirs, 'db1'):
            dblib = ['db1']

(right after 

        if self.compiler.find_library_file(lib_dirs, 'db'):
            dblib = ['db'])

To create bsddb correctly on my system (otherwise it gets installed
but cannot be imported).

I'm using Debian sid.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From tim.one@home.com  Sat Mar 24 07:52:28 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 24 Mar 2001 02:52:28 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <E14ghv5-0003fu-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEAGJIAA.tim.one@home.com>

[Moshe Zadka]
> The bug is in Lib/xml/__init__.py
>
> __version__ = "1.9".split()[1]

Believe me, we would not have shipped 2.1b2 if it failed any of the std tests
(and I ran the whole suite 8 ways:  with and without nuking all .pyc/.pyo
files first, with and without -O, and under release and debug builds).

> I don't know what it was supposed to be, but .split() without an
> argument splits on whitespace. best guess is "1.9".split('.') ??

On my box that line is:

__version__ = "$Revision: 1.9 $".split()[1]

So this is this some CVS retrieval screwup?



From moshez@zadka.site.co.il  Sat Mar 24 08:01:44 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 10:01:44 +0200
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEAGJIAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCOEAGJIAA.tim.one@home.com>
Message-ID: <E14gizY-0005B1-00@darjeeling>

On Sat, 24 Mar 2001 02:52:28 -0500, "Tim Peters" <tim.one@home.com> wrote:
 
> Believe me, we would not have shipped 2.1b2 if it failed any of the std tests
> (and I ran the whole suite 8 ways:  with and without nuking all .pyc/.pyo
> files first, with and without -O, and under release and debug builds).
> 
> > I don't know what it was supposed to be, but .split() without an
> > argument splits on whitespace. best guess is "1.9".split('.') ??
> 
> On my box that line is:
> 
> __version__ = "$Revision: 1.9 $".split()[1]
> 
So is this some CVS retrieval screwup?

Probably.
But nobody cares about your machine <1.9 wink>
In the Py2.1b2 you shipped, the line says
'''
__version__ = "1.9".split()[1]
'''
It's line 18.
That, or someone managed to crack one of the routers from SF to me.

should-we-start-signing-our-releases-ly y'rs, Z. 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From tim.one@home.com  Sat Mar 24 08:19:20 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 24 Mar 2001 03:19:20 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <E14gizY-0005B1-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAIJIAA.tim.one@home.com>

> Probably.
> But nobody cares about your machine <1.9 wink>
> In the Py2.1b2 you shipped, the line says
> '''
> __version__ = "1.9".split()[1]
> '''
> It's line 18.

No, in the 2.1b2 I installed on my machine, from the installer I sucked down
from SourceForge, the line is what I said it was:

__version__ = "$Revision: 1.9 $".split()[1]

So you're talking about something else, but I don't know what ...

Ah, OK!  It's that silly source tarball, Python-2.1b2.tgz.  I just sucked
that down from SF, and *that* does have the damaged line just as you say (in
Lib/xml/__init__.py).

I guess we're going to have to wait for Guido to wake up and explain how this
got hosed ... in the meantime, switch to Windows and use a real installer
<wink>.



From martin@loewis.home.cs.tu-berlin.de  Sat Mar 24 08:19:44 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 24 Mar 2001 09:19:44 +0100
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
Message-ID: <200103240819.f2O8JiF01844@mira.informatik.hu-berlin.de>

>> The mathematical definition for max() I learned in Calculus 101 was
>> "the smallest element which is > then all arguments"
>
>Then I guess American and Dutch calculus are different.
[from Israeli calculus]

The missing bit linking the two (sup and max) is

"The supremum of S is equal to its maximum if S possesses a greatest
member."
[http://www.cenius.fsnet.co.uk/refer/maths/articles/s/supremum.html]

So given a subset of a lattice, it may not have a maximum, but it will
always have a supremum. It appears that the Python max function
differs from the mathematical maximum in that respect: max will return
a value, even if that is not the "largest value"; the mathematical
maximum might give no value.

Regards,
Martin



From moshez@zadka.site.co.il  Sat Mar 24 09:13:46 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 11:13:46 +0200
Subject: [Python-Dev] setup.py is too aggressive
Message-ID: <E14gk7G-0005Wh-00@darjeeling>

It seems to me setup.py tries to build extension modules even when it's
impossible.  E.g., I had to add the attached patch so I no longer get
ImportErrors where the module shouts at me that it could not find a symbol.

*** Python-2.1b2/setup.py	Wed Mar 21 09:44:53 2001
--- Python-2.1b2-changed/setup.py	Sat Mar 24 10:49:20 2001
***************
*** 326,331 ****
--- 326,334 ----
              if (self.compiler.find_library_file(lib_dirs, 'ndbm')):
                  exts.append( Extension('dbm', ['dbmmodule.c'],
                                         libraries = ['ndbm'] ) )
+             elif (self.compiler.find_library_file(lib_dirs, 'db1')):
+                 exts.append( Extension('dbm', ['dbmmodule.c'],
+                                        libraries = ['db1'] ) )
              else:
                  exts.append( Extension('dbm', ['dbmmodule.c']) )
  
***************
*** 348,353 ****
--- 351,358 ----
          dblib = []
          if self.compiler.find_library_file(lib_dirs, 'db'):
              dblib = ['db']
+         if self.compiler.find_library_file(lib_dirs, 'db1'):
+             dblib = ['db1']
          
          db185_incs = find_file('db_185.h', inc_dirs,
                                 ['/usr/include/db3', '/usr/include/db2'])

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From tim.one@home.com  Sat Mar 24 10:19:15 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 24 Mar 2001 05:19:15 -0500
Subject: [Python-Dev] RE: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <200103240819.f2O8JiF01844@mira.informatik.hu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEAMJIAA.tim.one@home.com>

[Martin v. Loewis]
> The missing bit linking the two (sup and max) is
>
> "The supremum of S is equal to its maximum if S possesses a greatest
> member."
> [http://www.cenius.fsnet.co.uk/refer/maths/articles/s/supremum.html]
>
> So given a subset of a lattice, it may not have a maximum, but it will
> always have a supremum. It appears that the Python max function
> differs from the mathematical maximum in that respect: max will return
> a value, even if that is not the "largest value"; the mathematical
> maximum might give no value.

Note that the definition of supremum given on that page can't be satisfied in
general for lattices.  For example "x divides y" induces a lattice, where gcd
is the glb and lcm (least common multiple) the lub.  The set {6, 15} then has
lub 30, but 30 is not a supremum under the 2nd clause of that page, because
10 divides 30 yet divides neither 6 nor 15 (so there's an element "less than"
(== that divides) 30 which no element in the set is "larger than").

So that defn. is suitable for real analysis, but the more general defn. of
sup(S) is simply that X = sup(S) iff X is an upper bound for S (same as the
1st clause on the referenced page), and that every upper bound Y of S is >=
X.  That works for lattices too.
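The divisibility example can be checked directly (a sketch; `math.lcm` needs Python 3.9+, and the modern function names are used purely for illustration):

```python
import math

# In the "x divides y" lattice, gcd is the meet (glb) and lcm the join (lub).
S = (6, 15)
lub = math.lcm(*S)
assert lub == 30
assert math.gcd(*S) == 3

# 10 is "below" 30 in this order (10 divides 30) ...
assert lub % 10 == 0
# ... yet no element of S is "above" 10 (10 divides neither 6 nor 15),
# which is why the real-analysis clause for a supremum fails here.
assert all(x % 10 != 0 for x in S)
```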

Since Python's max works on sequences, and never terminates given an infinite
sequence, it only makes *sense* to ask what max(S) returns for finite
sequences S.  Under a total ordering, every finite set S has a maximal
element (an element X of S such that for all Y in S Y <= X), and Python's
max(S) does return one.  If there's only a partial ordering, Python's max()
is unpredictable (may or may not blow up, depending on the order the elements
are listed; e.g., [a, b, c] where a<b and c<b but a and c aren't comparable:
in that order, max returns b, but if given in order [a, c, b] max blows up).

Since this is all obvious to the most casual observer <0.9 wink>, it remains
unclear what the brouhaha is about.
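The order-dependence claim is easy to demonstrate with a toy wrapper class (hypothetical, not from the thread; it assumes CPython's max compares items with >):

```python
class Div:
    """Partial order by divisibility: x > y iff y properly divides x.
    Incomparable pairs return NotImplemented, so comparing them raises
    TypeError.  Illustration only."""
    def __init__(self, n):
        self.n = n
    def __gt__(self, other):
        if self.n != other.n and self.n % other.n == 0:
            return True            # other properly divides self
        if other.n % self.n == 0:
            return False           # self divides other (or they're equal)
        return NotImplemented      # incomparable
    def __lt__(self, other):
        return other.__gt__(self)
    def __repr__(self):
        return "Div(%d)" % self.n

a, b, c = Div(2), Div(6), Div(3)   # a < b and c < b; a and c incomparable

# Listed so that every comparison max makes is decidable: returns b.
assert max([a, b, c]).n == 6

# Listed so that max compares a with c first: blows up.
try:
    max([a, c, b])
except TypeError:
    pass
```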



From loewis@informatik.hu-berlin.de  Sat Mar 24 12:02:53 2001
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 24 Mar 2001 13:02:53 +0100 (MET)
Subject: [Python-Dev] setup.py is too aggressive
Message-ID: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>

> It seems to me setup.py tries to build libraries even when it's
> impossible E.g., I had to add the patch attached so I will get no
> more ImportErrors where the module shouts at me that it could not
> find a symbol.

The more general problem here is that building of a module may fail:
Even if a library is detected correctly, it might be that additional
libraries are needed. In some cases, it helps to put the correct
module line into Modules/Setup (which would have helped in your case);
then setup.py will not attempt to build the module.

However, there may be cases where a module cannot be built at all:
either some libraries are missing, or the module won't work on the
system for some other reason (e.g. since the system library it relies
on has some bug).

There should be a mechanism to tell setup.py not to build a module at
all. Since it is looking into Modules/Setup anyway, perhaps a

*excluded*
dbm

syntax in Modules/Setup would be appropriate? Of course, makesetup
needs to be taught such a syntax. Alternatively, an additional
configuration file or command line options might work.

In any case, distributors are certainly advised to run the testsuite
and potentially remove or fix modules for which the tests fail.

Regards,
Martin


From moshez@zadka.site.co.il  Sat Mar 24 12:09:04 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 14:09:04 +0200
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>
References: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>
Message-ID: <E14gmqu-0006Ex-00@darjeeling>

On Sat, 24 Mar 2001, Martin von Loewis <loewis@informatik.hu-berlin.de> wrote:

> In any case, distributors are certainly advised to run the testsuite
> and potentially remove or fix modules for which the tests fail.

These, however, aren't flagged as failures -- they're flagged as
ImportErrors, which are ignored during the tests.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From loewis@informatik.hu-berlin.de  Sat Mar 24 12:23:47 2001
From: loewis@informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 24 Mar 2001 13:23:47 +0100 (MET)
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <E14gmqu-0006Ex-00@darjeeling> (message from Moshe Zadka on Sat,
 24 Mar 2001 14:09:04 +0200)
References: <200103241202.NAA19000@pandora.informatik.hu-berlin.de> <E14gmqu-0006Ex-00@darjeeling>
Message-ID: <200103241223.NAA19062@pandora.informatik.hu-berlin.de>

> > In any case, distributors are certainly advised to run the testsuite
> > and potentially remove or fix modules for which the tests fail.
> 
> These, however, aren't flagged as failures -- they're flagged as
> ImportErrors which are ignored during tests

I see. Is it safe to say, for all modules in the core, that importing
them has no "dangerous" side effect? In that case, setup.py could
attempt to import them after they've been built, and delete the ones
that fail to import. Of course, that would also delete modules where
setting LD_LIBRARY_PATH might cure the problem...
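A minimal sketch of that import-probe idea (hypothetical helper, written with the modern importlib spelling rather than 2001-era __import__):

```python
import importlib

def probe(module_names):
    """Return (ok, broken): which built modules actually import.
    A build step could then delete or report the broken ones."""
    ok, broken = [], []
    for name in module_names:
        try:
            importlib.import_module(name)
            ok.append(name)
        except ImportError:       # covers missing symbols and missing modules
            broken.append(name)
    return ok, broken

ok, broken = probe(["math", "no_such_extension_module"])
assert "math" in ok
assert "no_such_extension_module" in broken
```

Note the caveat from the thread still applies: a module that fails here might import fine once LD_LIBRARY_PATH is set.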

Regards,
Martin


From moshez@zadka.site.co.il  Sat Mar 24 12:24:48 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 14:24:48 +0200
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <200103241223.NAA19062@pandora.informatik.hu-berlin.de>
References: <200103241223.NAA19062@pandora.informatik.hu-berlin.de>, <200103241202.NAA19000@pandora.informatik.hu-berlin.de> <E14gmqu-0006Ex-00@darjeeling>
Message-ID: <E14gn68-0006Jk-00@darjeeling>

On Sat, 24 Mar 2001, Martin von Loewis <loewis@informatik.hu-berlin.de> wrote:

> I see. Is it safe to say, for all modules in the core, that importing
> them has no "dangerous" side effect? In that case, setup.py could
> attempt to import them after they've been build, and delete the ones
> that fail to import. Of course, that would also delete modules where
> setting LD_LIBRARY_PATH might cure the problem...

So people who build will have to set LD_LIBRARY_PATH too. I don't see a
problem with that...
(particularly since this will mean that, if the tests pass, only modules
which were tested will be installed, theoretically...)
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From guido@digicool.com  Sat Mar 24 13:10:21 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 08:10:21 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 08:53:03 +0200."
 <E14ghv5-0003fu-00@darjeeling>
References: <E14ghv5-0003fu-00@darjeeling>
Message-ID: <200103241310.IAA21370@cj20424-a.reston1.va.home.com>

> The bug is in Lib/xml/__init__.py
> 
> __version__ = "1.9".split()[1]
> 
> I don't know what it was supposed to be, but .split() without an
> argument splits on whitespace. best guess is "1.9".split('.') ??

This must be because I used "cvs export -kv" to create the tarball
this time.  This may warrant a release update :-(

--Guido van Rossum (home page: http://www.python.org/~guido/)


From ping@lfw.org  Sat Mar 24 13:33:05 2001
From: ping@lfw.org (Ka-Ping Yee)
Date: Sat, 24 Mar 2001 05:33:05 -0800 (PST)
Subject: [Python-Dev] Re: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <200103240819.f2O8JiF01844@mira.informatik.hu-berlin.de>
Message-ID: <Pine.LNX.4.10.10103240532080.4368-100000@skuld.kingmanhall.org>

On Sat, 24 Mar 2001, Martin v. Loewis wrote:
> So given a subset of a lattice, it may not have a maximum, but it will
> always have a supremum. It appears that the Python max function
> differs from the mathematical maximum in that respect: max will return
> a value, even if that is not the "largest value"; the mathematical
> maximum might give no value.

Ah, but in Python most collections are usually finite. :)


-- ?!ng



From guido@digicool.com  Sat Mar 24 13:33:59 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 08:33:59 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 08:53:03 +0200."
 <E14ghv5-0003fu-00@darjeeling>
References: <E14ghv5-0003fu-00@darjeeling>
Message-ID: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>

OK, here's what I've done.  I've done a new cvs export of the r21b2
tag, this time *without* specifying -kv.  I've tarred it up and
uploaded it to SF and python.org.  The new tarball is called
Python-2.1b2a.tgz to distinguish it from the broken one.  I've removed
the old, broken tarball, and added a note to the python.org/2.1/ page
about the new tarball.

Background:

"cvs export -kv" changes all CVS keyword insertions from "$Revision:
1.9 $" to "1.9".  (It affects other CVS keywords too.)  This is so that
the versions don't get changed when someone else incorporates it into
their own CVS tree, which used to be a common usage pattern.

The question is, should we bother to make the code robust under
releases with -kv or not?  I used to write code that dealt with the
fact that __version__ could be either "$Revision: 1.9 $" or "1.9", but
clearly that bit of arcane knowledge got lost.
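That bit of arcane knowledge amounts to a one-liner (hypothetical helper name; it handles both the expanded keyword and the -kv form):

```python
def keyword_version(s):
    """Return '1.9' whether s is '$Revision: 1.9 $' (a normal CVS
    checkout) or just '1.9' (after 'cvs export -kv')."""
    words = s.split()
    # Expanded keyword: ['$Revision:', '1.9', '$'] -> take the middle word.
    # Bare value after -kv: ['1.9'] -> take the only word.
    return words[1] if len(words) > 1 else words[0]

assert keyword_version("$Revision: 1.9 $") == "1.9"
assert keyword_version("1.9") == "1.9"
```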

--Guido van Rossum (home page: http://www.python.org/~guido/)


From gmcm@hypernet.com  Sat Mar 24 13:46:33 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Sat, 24 Mar 2001 08:46:33 -0500
Subject: [Python-Dev] Making types behave like classes
In-Reply-To: <3ABBF553.274D535@ActiveState.com>
Message-ID: <3ABC5EE9.2943.14C818C7@localhost>

[Paul Prescod]
> These are some half-baked ideas about getting classes and types
> to look more similar. I would like to know whether they are
> workable or not and so I present them to the people best equipped
> to tell me.

[expand Py_FindMethod's actions]

>  * __class__ would return the type object
>  * __add__, __len__, __call__, ... would return a method wrapper
>    around the appropriate slot,
>  * __init__ might map to a no-op
> 
> I think that Py_FindMethod could even implement inheritance
> between types if we wanted.
> 
> We already do this magic for __methods__ and __doc__. Why not for
> all of the magic methods?

Those are introspective; typically read in the interactive 
interpreter. I can't do anything with them except read them.

If you wrap, eg, __len__, what can I do with it except call it? I 
can already do that with len().

> Benefits:
> 
>  * objects based on extension types would "look more like"
>  classes to
> Python programmers so there is less confusion about how they are
> different

I think it would probably enhance confusion to have the "look 
more like" without "being more like".
 
>  * users could stop using the type() function to get concrete
>  types and
> instead use __class__. After a version or two, type() could be
> formally deprecated in favor of isinstance and __class__.

__class__ is a callable object. It has a __name__. From the 
Python side, a type isn't much more than an address. Until 
Python's object model is redone, there are certain objects for 
which type(o) and o.__class__ return quite different things.
 
>  * we will have started some momentum towards type/class
>  unification
> which we could continue on into __setattr__ and subclassing.

The major lesson I draw from ExtensionClass and friends is 
that achieving this behavior in today's Python is horrendously 
complex and fragile. Until we can do it right, I'd rather keep it 
simple (and keep the warts on the surface).

- Gordon


From moshez@zadka.site.co.il  Sat Mar 24 13:45:32 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 15:45:32 +0200
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>
References: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>, <E14ghv5-0003fu-00@darjeeling>
Message-ID: <E14goMG-0006bL-00@darjeeling>

On Sat, 24 Mar 2001 08:33:59 -0500, Guido van Rossum <guido@digicool.com> wrote:

> OK, here's what I've done.  I've done a new cvs export of the r21b2
> tag, this time *without* specifying -kv.

This was clearly the solution to *this* problem ;-)
"No code changes in CVS between the same release" sounds like a good
rule.

> The question is, should we bother to make the code robust under
> releases with -kv or not?

Yes.
People *will* be incorporating Python into their own CVS trees. FreeBSD
does it with ports, and Debian are thinking of moving in this direction,
and some Debian maintainers already do that with upstream packages --
Python might be handled like that too.

The only problem I see is that we need to run the test-suite with a -kv'less
export. Fine, this should be part of the release procedure. 
I just went through the core grepping for '$Revision' and it seems this
is the only place this happens -- all the other places either put the default
version (RCS cruft and all), or are smart about handling it.

Since "smart" means just
__version__ = [part for part in "$Revision$".split() if '$' not in part][0]
We can just mandate that, and be safe.

However, whatever we do, the Windows build and the UNIX build must be the
same.
I think it should be possible to build the Windows version from the .tgz
and that is what (IMHO) should happen, instead of Tim and Guido exporting
from the CVS independently. This would stop problems like the one
Tim and I had this (my time) morning.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From guido@digicool.com  Sat Mar 24 15:34:13 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 10:34:13 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 15:45:32 +0200."
 <E14goMG-0006bL-00@darjeeling>
References: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>, <E14ghv5-0003fu-00@darjeeling>
 <E14goMG-0006bL-00@darjeeling>
Message-ID: <200103241534.KAA27065@cj20424-a.reston1.va.home.com>

> People *will* be incorporating Python into their own CVS trees. FreeBSD
> does it with ports, and Debian are thinking of moving in this direction,
> and some Debian maintainers already do that with upstream packages --
> Python might be handled like that too.

I haven't seen *any* complaints about this, so is it possible that
they don't mind having the $Revision: ... $ strings in there?

> The only problem I see is that we need to run the test-suite with a
> -kv'less export.  Fine, this should be part of the release
> procedure.  I just went through the core grepping for '$Revision'
> and it seems this is the only place this happens -- all the other
> places either put the default version (RCS cruft and all), or are
> smart about handling it.

Hm.  This means that the -kv version gets *much* less testing than the
regular checkout version.  I've done this in the past with
other projects and I remember that the bugs produced by this kind of
error are very subtle and not always caught by the test suite.

So I'm skeptical.

> Since "smart" means just
> __version__ = [part for part in "$Revision$".split() if '$' not in part][0]
> We can just mandate that, and be safe.

This is less typing, and no more obscure, and seems to work just as
well given that the only two inputs are "$Revision: 1.9 $" or "1.9":

    __version__ = "$Revision: 1.9 $".split()[-2:][0]
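
Both Moshe's idiom and this shorter one can be checked against the two
inputs in question; a quick sketch (not part of the original mail):

```python
# The "smart" list-comprehension idiom: keep the token without a '$'.
smart = lambda rev: [p for p in rev.split() if '$' not in p][0]
# Guido's shorter idiom: second-to-last whitespace-separated token.
short = lambda rev: rev.split()[-2:][0]

for rev in ("$Revision: 1.9 $", "1.9"):
    # Either way we recover the bare version number.
    assert smart(rev) == "1.9"
    assert short(rev) == "1.9"
```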

> However, whatever we do, the Windows build and the UNIX build must be the
> same.

That's hard right there -- we currently build the Windows compiler
right out of the CVS tree.

> I think it should be possible to build the Windows version from the .tgz
> and that is what (IMHO) should happen, instead of Tim and Guido exporting
> from the CVS independently. This would stop problems like the one
> Tim and I had this (my time) morning.

Who are you to tell us how to work?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From moshez@zadka.site.co.il  Sat Mar 24 15:41:10 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 17:41:10 +0200
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <200103241534.KAA27065@cj20424-a.reston1.va.home.com>
References: <200103241534.KAA27065@cj20424-a.reston1.va.home.com>, <200103241333.IAA25157@cj20424-a.reston1.va.home.com>, <E14ghv5-0003fu-00@darjeeling>
 <E14goMG-0006bL-00@darjeeling>
Message-ID: <E14gqAA-0006uP-00@darjeeling>

On Sat, 24 Mar 2001 10:34:13 -0500, Guido van Rossum <guido@digicool.com> wrote:

> I haven't seen *any* complaints about this, so is it possible that
> they don't mind having the $Revision: ... $ strings in there?

I don't know.
Like I said, my feelings about that are not very strong...

> > I think it should be possible to build the Windows version from the .tgz
> > and that is what (IMHO) should happen, instead of Tim and Guido exporting
> > from the CVS independently. This would stop problems like the one
> > Tim and I had this (my time) morning.
> 
> Who are you to tell us how to work?

I said "I think" and "IMHO", so I'm covered. I was only giving suggestions.
You're free to ignore them if you think my opinion is without merit.
I happen to think otherwise <8am wink>, but you're the BDFL and I'm not.
Are you saying it's not important to you that the .py's in Windows and
UNIX are the same?
I think it should be a priority, given that when people complain about
OS-independent problems, they often neglect to mention the OS.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From martin@loewis.home.cs.tu-berlin.de  Sat Mar 24 16:49:10 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 24 Mar 2001 17:49:10 +0100
Subject: [Python-Dev] Re: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <Pine.LNX.4.10.10103240532080.4368-100000@skuld.kingmanhall.org>
 (message from Ka-Ping Yee on Sat, 24 Mar 2001 05:33:05 -0800 (PST))
References: <Pine.LNX.4.10.10103240532080.4368-100000@skuld.kingmanhall.org>
Message-ID: <200103241649.f2OGnAa04582@mira.informatik.hu-berlin.de>

> On Sat, 24 Mar 2001, Martin v. Loewis wrote:
> > So given a subset of a lattice, it may not have a maximum, but it will
> > always have a supremum. It appears that the Python max function
> > differs from the mathematical maximum in that respect: max will return
> > a value, even if that is not the "largest value"; the mathematical
> > maximum might give no value.
> 
> Ah, but in Python most collections are usually finite. :)

Even a finite collection may not have a maximum, which Moshe's
original example illustrates:

s1 = set(1,4,5)
s2 = set(4,5,6)

max([s1,s2]) == ???

With respect to the subset relation, the collection [s1,s2] has no
maximum; its supremum is set(1,4,5,6). A maximum is only guaranteed to
exist for a finite collection if the order is total.
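
The same behavior can be checked with today's built-in set type (the
set() in the example above is hypothetical), whose comparison operators
implement the partial subset order:

```python
s1 = {1, 4, 5}
s2 = {4, 5, 6}

# Neither set is a subset of the other, so the pair has no maximum
# under the subset ordering...
assert not s1 <= s2 and not s2 <= s1

# ...yet max() still returns a value: the first element, because the
# only comparison it makes (s2 > s1) is false.
assert max([s1, s2]) == s1

# The supremum is the union, which is not in the collection at all.
assert s1 | s2 == {1, 4, 5, 6}
```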

Regards,
Martin


From barry@digicool.com  Sat Mar 24 17:19:20 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Sat, 24 Mar 2001 12:19:20 -0500
Subject: [Python-Dev] test_minidom crash
References: <E14ghv5-0003fu-00@darjeeling>
 <200103241310.IAA21370@cj20424-a.reston1.va.home.com>
Message-ID: <15036.55064.497185.806163@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:

    >> The bug is in Lib/xml/__init__.py __version__ =
    >> "1.9".split()[1] I don't know what it was supposed to be, but
    >> .split() without an argument splits on whitespace. best guess
    >> is "1.9".split('.') ??

    GvR> This must be because I used "cvs export -kv" to create the
    GvR> tarball this time.  This may warrant a release update :-(

Using "cvs export -kv" is a Good Idea for a release!  That's because
if others import the release into their own CVS, or pull the file into
an unrelated CVS repository, your revision numbers are preserved.

I haven't followed this thread very carefully, but isn't there a
better way to fix the problem rather than stop using -kv (I'm not sure
that's what Guido has in mind)?

-Barry


From martin@loewis.home.cs.tu-berlin.de  Sat Mar 24 17:30:46 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 24 Mar 2001 18:30:46 +0100
Subject: [Python-Dev] test_minidom crash
Message-ID: <200103241730.f2OHUkJ04865@mira.informatik.hu-berlin.de>

[Moshe]
> I just went through the core grepping for '$Revision' and it seems
> this is the only place this happens -- all the other places either
> put the default version (RCS cruft and all), or are smart about
> handling it.

You have not searched carefully enough. pyexpat.c has

    char *rev = "$Revision: 2.44 $";
...
    PyModule_AddObject(m, "__version__",
                       PyString_FromStringAndSize(rev+11, strlen(rev+11)-2));

> I haven't seen *any* complaints about this, so is it possible that
> they don't mind having the $Revision: ... $ strings in there?

The problem is that they don't know the problems they run into
(yet). E.g. if they import pyexpat.c into their tree, they get
1.1.1.1; even after later imports, they still get 1.x. Now, PyXML
currently decides that the Python pyexpat is not good enough if it is
older than 2.39. In turn, they might get different code being used
when installing out of their CVS as compared to installing from the
source distributions.

That all shouldn't cause problems, but it would probably help if
source releases continue to use -kv; then likely every end-user will
get the same sources. I'd volunteer to review the core sources (and
produce patches) if that is desired.

Regards,
Martin


From barry@digicool.com  Sat Mar 24 17:33:47 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Sat, 24 Mar 2001 12:33:47 -0500
Subject: [Python-Dev] test_minidom crash
References: <E14ghv5-0003fu-00@darjeeling>
 <200103241333.IAA25157@cj20424-a.reston1.va.home.com>
Message-ID: <15036.55931.367420.983599@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:

    GvR> The question is, should we bother to make the code robust
    GvR> under releases with -kv or not?

Yes.
    
    GvR> I used to write code that dealt with the fact that
    GvR> __version__ could be either "$Release: 1.9$" or "1.9", but
    GvR> clearly that bit of arcane knowledge got lost.

Time to re-educate then!

On the one hand, I personally try to avoid assigning __version__ from
a CVS revision number because I'm usually interested in a more
confederated release.  I.e. mimelib 0.2 as opposed to
mimelib/mimelib/__init__.py revision 1.4.  If you want the CVS
revision of the file to be visible in the file, use a different global
variable, or stick it in a comment and don't worry about sucking out
just the numbers.

OTOH, I understand this is a convenient way to not have to munge
version numbers so lots of people do it (I guess).

Oh, I see there are other followups to this thread, so I'll shut up
now.  I think Guido's split() idiom is the Right Thing To Do; it works
with branch CVS numbers too:

>>> "$Revision: 1.9.4.2 $".split()[-2:][0]
'1.9.4.2'
>>> "1.9.4.2".split()[-2:][0]
'1.9.4.2'

-Barry


From guido@digicool.com  Sat Mar 24 18:13:45 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 13:13:45 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 12:19:20 EST."
 <15036.55064.497185.806163@anthem.wooz.org>
References: <E14ghv5-0003fu-00@darjeeling> <200103241310.IAA21370@cj20424-a.reston1.va.home.com>
 <15036.55064.497185.806163@anthem.wooz.org>
Message-ID: <200103241813.NAA27426@cj20424-a.reston1.va.home.com>

> Using "cvs export -kv" is a Good Idea for a release!  That's because
> if others import the release into their own CVS, or pull the file into
> an unrelated CVS repository, your revision numbers are preserved.

I know, but I doubt that this is used much any more.  I haven't had
any complaints about this, and I know that we didn't use -kv for
previous releases (I checked 1.5.2, 1.6 and 2.0).

> I haven't followed this thread very carefully, but isn't there a
> better way to fix the problem rather than stop using -kv (I'm not sure
> that's what Guido has in mind)?

Well, if we only use -kv to create the final tarball and installer, and
everybody else uses just the CVS version, the problem is that we don't
have enough testing time in.

Given that most code is written to deal with "$Revision: 1.9 $", why
bother breaking it?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Sat Mar 24 18:14:51 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 13:14:51 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 18:30:46 +0100."
 <200103241730.f2OHUkJ04865@mira.informatik.hu-berlin.de>
References: <200103241730.f2OHUkJ04865@mira.informatik.hu-berlin.de>
Message-ID: <200103241814.NAA27441@cj20424-a.reston1.va.home.com>

> That all shouldn't cause problems, but it would probably help if
> source releases continue to use -kv; then likely every end-user will
> get the same sources. I'd volunteer to review the core sources (and
> produce patches) if that is desired.

I'm not sure if it's a matter of "continue to use" -- as I said, 1.5.2
and later releases haven't used -kv.

Nevertheless, patches to fix this will be most welcome.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From tim.one@home.com  Sat Mar 24 20:49:46 2001
From: tim.one@home.com (Tim Peters)
Date: Sat, 24 Mar 2001 15:49:46 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <E14goMG-0006bL-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEBMJIAA.tim.one@home.com>

[Moshe]
> ...
> I just went through the core grepping for '$Revision' and it seems
> this is the only place this happens -- all the other places either put
> the default version (RCS cruft and all), or are smart about handling it.

Hmm.  Unless it's in a *comment*, I expect most uses are dubious.  Clear
example, from the new Lib/unittest.py:

__version__ = "$Revision: 1.2 $"[11:-2]

Presumably that's yielding an empty string under the new tarball release.
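
(Indeed, checking the slice against both forms bears that out:)

```python
# unittest.py strips the "$Revision: " prefix (11 characters) and the
# " $" suffix by fixed offsets; a bare "1.2" is shorter than the
# prefix, so the slice comes back empty.
assert "$Revision: 1.2 $"[11:-2] == "1.2"
assert "1.2"[11:-2] == ""
```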

One of a dozen fuzzy examples, from pickle.py:

__version__ = "$Revision: 1.46 $"       # Code version

The module makes no other use of this, and since it's not in a comment I have
to presume that the author *intended* clients to access pickle.__version__
directly.  But, if so, they've been getting the $Revision business for years,
so changing the released format now could break users' code.

> ...
> However, whatever we do, the Windows build and the UNIX build must be
> the same.

*Sounds* good <wink>.

> I think it should be possible to build the Windows version from the
> .tgz and that is what (IMHO) should happen, instead of Tim and Guido
> exporting from the CVS independantly. This would stop problems like the
> one Tim and I had this (my time) morning.

Ya, sounds good too.  A few things against it:  The serialization would add
hours to the release process, in part because I get a lot of testing done
now, on the Python I install *from* the Windows installer I build, while the
other guys are finishing the .tgz business (note that Guido doesn't similarly
run tests on a Python built from the tarball, else he would have caught this
problem before you!).

Also in part because the Windows installer is not a simple packaging of the
source tree:  the Windows version also ships with pre-compiled components for
Tcl/Tk, zlib, bsddb and pyexpat.  The source for that stuff doesn't come in
the tarball; it has to be sprinkled "by hand" into the source tree.

The last gets back to Guido's point, which is also a good one:  if the
Windows release gets built from a tree I've used for the very first time a
couple hours before the release, the odds are higher that a process screwup
gets overlooked.

To date, there have been no "process bugs" in the Windows build process, and
I'd be loath to give that up.  Building from the tree I use every day is ...
reassuring.

At heart, I don't much like the idea of using source revision numbers as code
version numbers anyway -- "New and Improved!  Version 1.73 stripped a
trailing space from line 239!" <wink>.

more-info-than-anyone-needs-to-know-ly y'rs  - tim



From paul@pfdubois.com  Sat Mar 24 22:14:03 2001
From: paul@pfdubois.com (Paul F. Dubois)
Date: Sat, 24 Mar 2001 14:14:03 -0800
Subject: [Python-Dev] distutils change breaks code, Pyfort
Message-ID: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>

The requirement of a version argument to the distutils command breaks Pyfort
and many of my existing packages. These packages are not intended for use
with the distribution commands and a package version number would be
meaningless.

I will make a new Pyfort that supplies a version number to the call it makes
to setup. However, I think this change to distutils is a poor idea. If the
version number would be required for the distribution commands, let *them*
complain, perhaps by setting a default value of time.asctime(time.gmtime())
or something that the distribution commands could object to.

I apologize if I missed an earlier discussion of this change that seems to
be in 2.1b2 but not 2.1b1, as I am new to this list.

Paul




From jafo@tummy.com  Sat Mar 24 23:17:35 2001
From: jafo@tummy.com (Sean Reifschneider)
Date: Sat, 24 Mar 2001 16:17:35 -0700
Subject: [Python-Dev] RFC: PEP243: Module Repository Upload Mechanism
Message-ID: <20010324161735.A19818@tummy.com>

Included below is the version of PEP 243 after its initial round of review.
I welcome any feedback.

Thanks,
Sean

============================================================================
PEP: 243
Title: Module Repository Upload Mechanism
Version: $Revision$
Author: jafo-pep@tummy.com (Sean Reifschneider)
Status: Draft
Type: Standards Track
Created: 18-Mar-2001
Python-Version: 2.1
Post-History: 
Discussions-To: distutils-sig@python.org


Abstract

    For a module repository system (such as Perl's CPAN) to be
    successful, it must be as easy as possible for module authors to
    submit their work.  An obvious place for this submit to happen is
    in the Distutils tools after the distribution archive has been
    successfully created.  For example, after a module author has
    tested their software (verifying the results of "setup.py sdist"),
    they might type "setup.py sdist --submit".  This would flag
    Distutils to submit the source distribution to the archive server
    for inclusion and distribution to the mirrors.

    This PEP only deals with the mechanism for submitting the software
    distributions to the archive, and does not deal with the actual
    archive/catalog server.


Upload Process

    The upload will include the Distutils "PKG-INFO" meta-data
    information (as specified in PEP-241 [1]), the actual software
    distribution, and other optional information.  This information
    will be uploaded as a multi-part form encoded the same as a
    regular HTML file upload request.  This form is posted using
    ENCTYPE="multipart/form-data" encoding [RFC1867].

    The upload will be made to the host "modules.python.org" on port
    80/tcp (POST http://modules.python.org:80/swalowpost.cgi).  The form
    will consist of the following fields:

        distribution -- The file containing the module software (for
        example, a .tar.gz or .zip file).

        distmd5sum -- The MD5 hash of the uploaded distribution,
        encoded in ASCII representing the hexadecimal representation
        of the digest ("for byte in digest: s = s + ('%02x' %
        ord(byte))").

        pkginfo (optional) -- The file containing the distribution
        meta-data (as specified in PEP-241 [1]).  Note that if this is not
        included, the distribution file is expected to be in .tar format
        (gzip and bzip2 compression are allowed) or .zip format, with a
        "PKG-INFO" file in the top-level directory it extracts
        ("package-1.00/PKG-INFO").

        infomd5sum (required if pkginfo field is present) -- The MD5 hash
        of the uploaded meta-data, encoded in ASCII representing the
        hexadecimal representation of the digest ("for byte in digest:
        s = s + ('%02x' % ord(byte))").

        platform (optional) -- A string representing the target
        platform for this distribution.  This is only for binary
        distributions.  It is encoded as
        "<os_name>-<os_version>-<platform architecture>-<python
        version>".

        signature (optional) -- An OpenPGP-compatible signature [RFC2440]
        of the uploaded distribution as signed by the author.  This may be
        used by the cataloging system to automate acceptance of uploads.

        protocol_version -- A string indicating the protocol version that
        the client supports.  This document describes protocol version "1".
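
    A sketch of the digest encoding described above, in modern Python
    (hashlib stands in here for the md5 module of the PEP's era):

```python
import hashlib

def hexdigest_of(data):
    # The PEP's loop, "for byte in digest: s = s + ('%02x' % ord(byte))",
    # adapted for Python 3, where iterating bytes yields integers.
    digest = hashlib.md5(data).digest()
    return ''.join('%02x' % byte for byte in digest)

# hashlib exposes the same encoding directly as hexdigest():
assert hexdigest_of(b'distribution') == hashlib.md5(b'distribution').hexdigest()
```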


Return Data

    The status of the upload will be reported using non-standard HTTP
    ("X-*") headers.  The "X-Swalow-Status" header may have the following
    values:

        SUCCESS -- Indicates that the upload has succeeded.

        FAILURE -- The upload is, for some reason, unable to be
        processed.

        TRYAGAIN -- The server is unable to accept the upload at this
        time, but the client should try again at a later time.
        Potential causes of this are resource shortages on the server,
        administrative down-time, etc...

    Optionally, there may be an "X-Swalow-Reason" header which includes a
    human-readable string which provides more detailed information about
    the "X-Swalow-Status".

    If there is no "X-Swalow-Status" header, or it does not contain one of
    the three strings above, it should be treated as a temporary failure.

    Example:

        >>> f = urllib.urlopen('http://modules.python.org:80/swalowpost.cgi')
        >>> s = f.headers['x-swalow-status']
        >>> s = s + ': ' + f.headers.get('x-swalow-reason', '<None>')
        >>> print s
        FAILURE: Required field "distribution" missing.


Sample Form

    The upload client must submit the page in the same form as
    Netscape Navigator version 4.76 for Linux produces when presented
    with the following form:

        <H1>Upload file</H1>
        <FORM NAME="fileupload" METHOD="POST" ACTION="swalowpost.cgi"
              ENCTYPE="multipart/form-data">
        <INPUT TYPE="file" NAME="distribution"><BR>
        <INPUT TYPE="text" NAME="distmd5sum"><BR>
        <INPUT TYPE="file" NAME="pkginfo"><BR>
        <INPUT TYPE="text" NAME="infomd5sum"><BR>
        <INPUT TYPE="text" NAME="platform"><BR>
        <INPUT TYPE="text" NAME="signature"><BR>
        <INPUT TYPE="hidden" NAME="protocol_version" VALUE="1"><BR>
        <INPUT TYPE="SUBMIT" VALUE="Upload">
        </FORM>


Platforms

    The following are valid os names:

        aix beos debian dos freebsd hpux mac macos mandrake netbsd
        openbsd qnx redhat solaris suse windows yellowdog

    The above include a number of different types of distributions of
    Linux.  Because of versioning issues these must be split out, and
    it is expected that when it makes sense for one system to use
    distributions made on other similar systems, the download client
    will make the distinction.

    Version is the official version string specified by the vendor for
    the particular release.  For example, "2000" and "nt" (Windows),
    "9.04" (HP-UX), "7.0" (RedHat, Mandrake).

    The following are valid architectures:

        alpha hppa ix86 powerpc sparc ultrasparc


Status

    I currently have a proof-of-concept client and server implemented.
    I plan to have the Distutils patches ready for the 2.1 release.
    Combined with Andrew's PEP-241 [1] for specifying distribution
    meta-data, I hope to have a platform which will allow us to gather
    real-world data for finalizing the catalog system for the 2.2
    release.


References

    [1] Metadata for Python Software Package, Kuchling,
        http://python.sourceforge.net/peps/pep-0241.html

    [RFC1867] Form-based File Upload in HTML
        http://www.faqs.org/rfcs/rfc1867.html

    [RFC2440] OpenPGP Message Format
        http://www.faqs.org/rfcs/rfc2440.html


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:
-- 
 A smart terminal is not a smart*ass* terminal, but rather a terminal
 you can educate.  -- Rob Pike
Sean Reifschneider, Inimitably Superfluous <jafo@tummy.com>
tummy.com - Linux Consulting since 1995. Qmail, KRUD, Firewalls, Python


From martin@loewis.home.cs.tu-berlin.de  Sun Mar 25 00:47:26 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 25 Mar 2001 01:47:26 +0100
Subject: [Python-Dev] distutils change breaks code, Pyfort
Message-ID: <200103250047.f2P0lQn00987@mira.informatik.hu-berlin.de>

> The requirement of a version argument to the distutils command
> breaks Pyfort and many of my existing packages.  These packages are
> not intended for use with the distribution commands and a package
> version number would be meaningless.

So this is clearly an incompatible change.  In accordance with the
procedures in PEP 5, there should be a warning issued before aborting
setup.  Later (major) releases of Python, or distutils, could change
the warning into an error.

Nevertheless, I agree with the change in principle.  Distutils can and
should enforce a certain amount of policy; among this, having a
version number sounds like a reasonable requirement - even though its
primary use is for building (and uploading) distributions.  Are you
saying that Pyfort does not have a version number?  On SF, I can get
version 6.3...

Regards,
Martin


From paul@pfdubois.com  Sun Mar 25 01:43:52 2001
From: paul@pfdubois.com (Paul F. Dubois)
Date: Sat, 24 Mar 2001 17:43:52 -0800
Subject: [Python-Dev] RE: distutils change breaks code, Pyfort
In-Reply-To: <200103250047.f2P0lQn00987@mira.informatik.hu-berlin.de>
Message-ID: <ADEOIFHFONCLEEPKCACCAEDNCHAA.paul@pfdubois.com>

Pyfort is the kind of package the change was intended for, and it does have
a version number. But I have other packages that cannot stand on their own,
that are part of a bigger suite of packages, and dist is never going to be
used. They don't have a MANIFEST, etc. The setup.py file is used instead of
a Makefile. I don't think that it is logical to require a version number
that is not used in that case. We also raise the "entry fee" for learning to
use Distutils or starting a new package.

In the case of Pyfort there is NO setup.py; it is just running a command on
the fly. But I've already fixed it with version 6.3.

I think we have all focused on the public distribution problem but in fact
Distutils is just great as an internal tool for building large software
projects and that is how I use it. I agree that if I want to use sdist,
bdist etc. that I need to set the version. But then, I need to do other
things too in that case.

-----Original Message-----
From: Martin v. Loewis [mailto:martin@loewis.home.cs.tu-berlin.de]
Sent: Saturday, March 24, 2001 4:47 PM
To: paul@pfdubois.com
Cc: python-dev@python.org
Subject: distutils change breaks code, Pyfort


> The requirement of a version argument to the distutils command
> breaks Pyfort and many of my existing packages.  These packages are
> not intended for use with the distribution commands and a package
> version number would be meaningless.

So this is clearly an incompatible change.  In accordance with the
procedures in PEP 5, there should be a warning issued before aborting
setup.  Later (major) releases of Python, or distutils, could change
the warning into an error.

Nevertheless, I agree with the change in principle.  Distutils can and
should enforce a certain amount of policy; among this, having a
version number sounds like a reasonable requirement - even though its
primary use is for building (and uploading) distributions.  Are you
saying that Pyfort does not have a version number?  On SF, I can get
version 6.3...

Regards,
Martin



From barry@digicool.com  Sun Mar 25 03:06:21 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Sat, 24 Mar 2001 22:06:21 -0500
Subject: [Python-Dev] RE: distutils change breaks code, Pyfort
References: <200103250047.f2P0lQn00987@mira.informatik.hu-berlin.de>
 <ADEOIFHFONCLEEPKCACCAEDNCHAA.paul@pfdubois.com>
Message-ID: <15037.24749.117157.228368@anthem.wooz.org>

>>>>> "PFD" == Paul F Dubois <paul@pfdubois.com> writes:

    PFD> I think we have all focused on the public distribution
    PFD> problem but in fact Distutils is just great as an internal
    PFD> tool for building large software projects and that is how I
    PFD> use it.

I've used it this way too, and you're right, it's great for this.
Esp. for extensions, it's much nicer than fiddling with
Makefile.pre.in's etc.  So I think I agree with you about the version
numbers and other required metadata -- or at least, there should be an
escape.

-Barry


From tim.one@home.com  Sun Mar 25 05:07:20 2001
From: tim.one@home.com (Tim Peters)
Date: Sun, 25 Mar 2001 00:07:20 -0500
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010321214432.A25810@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEDBJIAA.tim.one@home.com>

[Neil Schemenauer]
> Apparently they [Icon-style generators] are good for lots of other
> things too.  Tonight I implemented passing values using resume().
>  Next, I decided to see if I had enough magic juice to tackle the
> coroutine example from Gordon's stackless tutorial.  It turns out
> that I didn't need the extra functionality.  Generators are enough.
>
> The code is not too long so I've attached it.  I figure that some
> people might need a break from 2.1 release issues.

I'm afraid we were buried alive under them at the time, and I don't want this
one to vanish in the bit bucket!

> I think the generator version is even simpler than the coroutine
> version.
>
> [Example code for the Dahl/Hoare "squasher" program elided -- see
>  the archive]

This raises a potentially interesting point:  is there *any* application of
coroutines for which simple (yield-only-to-immediate-caller) generators
wouldn't suffice, provided that they're explicitly resumable?

I suspect there isn't.  If you give me a coroutine program, and let me add a
"control loop", I can:

1. Create an Icon-style generator for each coroutine "before the loop".

2. Invoke one of the coroutines "before the loop".

3. Replace each instance of

       coroutine_transfer(some_other_coroutine, some_value)

   within the coroutines by

       yield some_other_coroutine, some_value

4. The "yield" then returns to the control loop, which picks apart
   the tuple to find the next coroutine to resume and the value to
   pass to it.

This starts to look a lot like uthreads, but built on simple generator
yield/resume.
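
The transformation above can be sketched as a small trampoline, using
today's generator syntax with send() standing in for the yield/resume
protocol under discussion (the names producer/consumer/run are invented
for illustration):

```python
def producer():
    # "Coroutine" that transfers values by yielding
    # (target_name, value) pairs back to the control loop.
    for x in [1, 2, 3]:
        yield ('consumer', x)
    yield ('consumer', None)               # sentinel: no more values

def make_consumer(results):
    def consumer():
        while True:
            x = yield ('producer', None)   # wait to be resumed with a value
            if x is None:
                yield (None, None)         # tell the control loop to stop
                return
            results.append(x)
    return consumer

def run(coroutines, start):
    # The "control loop": prime each generator to its first yield, then
    # keep resuming whichever coroutine the last yield named.
    gens = {name: fn() for name, fn in coroutines.items()}
    primed = {name: next(g) for name, g in gens.items()}
    target, value = primed[start]
    while target is not None:
        target, value = gens[target].send(value)

results = []
run({'producer': producer, 'consumer': make_consumer(results)}, 'producer')
print(results)   # [1, 2, 3]
```

Each coroutine only ever yields to its immediate caller (the loop), yet
values still hop directly between the two participants.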

It loses some things:

A. Coroutine A can't *call* routine B and have B do a co-transfer
   directly.  But A *can* invoke B as a generator and have B yield
   back to A, which in turn yields back to its invoker ("the control
   loop").

B. As with recursive Icon-style generators, a partial result generated
   N levels deep in the recursion has to suspend its way thru N
   levels of frames, and resume its way back down N levels of frames
   to get moving again.  Real coroutines can transmit results directly
   to the ultimate consumer.

OTOH, it may gain more than it loses:

A. Simple to implement in CPython without threads, and at least
   possible likewise even for Jython.

B. C routines "in the middle" aren't necessarily show-stoppers.  While
   they can't exploit Python's implementation of generators directly,
   they *could* participate in the yield/resume *protocol*, acting "as
   if" they were Python routines.  Just like Python routines have to
   do today, C routines would have to remember their own state and
   arrange to save/restore it appropriately across calls (but to the
   C routines, they *are* just calls and returns, and nothing trickier
   than that -- their frames truly vanish when "suspending up", so
   don't get in the way).

the-meek-shall-inherit-the-earth<wink>-ly y'rs  - tim



From nas@arctrix.com  Sun Mar 25 05:47:48 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Sat, 24 Mar 2001 21:47:48 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEDBJIAA.tim.one@home.com>; from tim.one@home.com on Sun, Mar 25, 2001 at 12:07:20AM -0500
References: <20010321214432.A25810@glacier.fnational.com> <LNBBLJKPBEHFEDALKOLCCEDBJIAA.tim.one@home.com>
Message-ID: <20010324214748.A32161@glacier.fnational.com>

On Sun, Mar 25, 2001 at 12:07:20AM -0500, Tim Peters wrote:
> If you give me a coroutine program, and let me add a "control
> loop", ...

This is exactly what I started doing when I was trying to rewrite
your Coroutine.py module to use generators.

> A. Simple to implement in CPython without threads, and at least
>    possible likewise even for Jython.

I'm not sure about Jython.  The sys._getframe(), frame.suspend(),
and frame.resume() low level interface is nice.  I think Jython
must know which frames are going to be suspended at compile time.
That makes it hard to build higher level control abstractions.  I
don't know much about Jython though so maybe there's another way.
In any case it should be possible to use threads to implement
some common higher level interfaces.

  Neil


From tim.one@home.com  Sun Mar 25 06:11:58 2001
From: tim.one@home.com (Tim Peters)
Date: Sun, 25 Mar 2001 01:11:58 -0500
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010324214748.A32161@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>

[Tim]
>> If you give me a coroutine program, and let me add a "control
>> loop", ...

[Neil Schemenauer]
> This is exactly what I started doing when I was trying to rewrite
> your Coroutine.py module to use generators.

Ya, I figured as much -- for a Canadian, you don't drool much <wink>.

>> A. Simple to implement in CPython without threads, and at least
>>    possible likewise even for Jython.

> I'm not sure about Jython.  The sys._getframe(), frame.suspend(),
> and frame.resume() low level interface is nice.  I think Jython
> must know which frames are going to be suspended at compile time.

Yes, Samuele said as much.  My belief is that generators don't become *truly*
pleasant unless "yield" ("suspend"; whatever) is made a new statement type.
Then Jython knows exactly where yields can occur.  As in CLU (but not Icon),
it would also be fine by me if routines *used* as generators also needed to
be explicitly marked as such (this is a non-issue in Icon because *every*
Icon expression "is a generator" -- there is no other kind of procedure
there).

> That makes it hard to build higher level control abstractions.
> I don't know much about Jython though so maybe there's another way.
> In any case it should be possible to use threads to implement
> some common higher level interfaces.

What I'm wondering is whether I care <0.4 wink>.  I agreed with you, e.g.,
that your squasher example was more pleasant to read using generators than in
its original coroutine form.  People who want to invent brand new control
structures will be happier with Scheme anyway.



From tim.one@home.com  Sun Mar 25 08:07:09 2001
From: tim.one@home.com (Tim Peters)
Date: Sun, 25 Mar 2001 03:07:09 -0500
Subject: FW: FW: [Python-Dev] Simple generator implementation
In-Reply-To: <200103210423.VAA20300@localhost.localdomain>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEDGJIAA.tim.one@home.com>

[Tim]
>> The correspondent I quoted believed the latter ["simple" generators]
>> were on-target for XSLT work ... But ... I don't know whether they're
>> sufficient for what you have in mind.

[Uche Ogbuji]
> Based on a discussion with Christian at IPC9, they are.  I should
> have been more clear about that.  My main need is to be able to change
> a bit of context and invoke a different execution path, without going
> through the full overhead of a function call.  XSLT, if written
> naturally", tends to involve huge numbers of such tweak-context-and-
> branch operations.
> ...
> Suspending only to the invoker should do the trick because it is
> typically a single XSLT instruction that governs multiple tree-
> operations with varied context.

Thank you for explaining more!  It's helpful.

> At IPC9, Guido put up a poll of likely use of stackless features,
> and it was a pretty clear arithmetic progression from those who
> wanted to use microthreads, to those who wanted co-routines, to
> those who wanted just generators.  The generator folks were
> probably 2/3 of the assembly.  Looks as if many have decided,
> and they seem to agree with you.

They can't:  I haven't taken a position <0.5 wink>.  As I said, I'm trying to
get closer to understanding the cost/benefit tradeoffs here.

I've been nagging in favor of simple generators for a decade now, and every
time I've tried they've gotten hijacked by some grander scheme with much
muddier tradeoffs.  That's been very frustrating, since I've had good uses
for simple generators darned near every day of my Python life, and "the only
thing stopping them" has been a morbid fascination with Scheme's mistakes
<wink>.  That phase appears to be over, and *now* "the only thing stopping
them" appears to be a healthy fascination with coroutines and uthreads.
That's cool, although this is definitely a "the perfect is the enemy of the
good" kind of thing.

trying-to-leave-a-better-world-for-the-children<wink>-ly y'rs  - tim



From paulp@ActiveState.com  Sun Mar 25 18:30:34 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Sun, 25 Mar 2001 10:30:34 -0800
Subject: [Python-Dev] Making types behave like classes
References: <3ABC5EE9.2943.14C818C7@localhost>
Message-ID: <3ABE3949.DE50540C@ActiveState.com>

Gordon McMillan wrote:
> 
>...
> 
> Those are introspective; typically read in the interactive
> interpreter. I can't do anything with them except read them.
>
> If you wrap, eg, __len__, what can I do with it except call it? 

You can store away a reference to it and then call it later.

> I can already do that with len().
> 
> > Benefits:
> >
> >  * objects based on extension types would "look more like"
> >  classes to
> > Python programmers so there is less confusion about how they are
> > different
> 
> I think it would probably enhance confusion to have the "look
> more like" without "being more like".

Looking more like is the same as being more like. In other words, there
is a finite list of differences in behavior between types and classes
and I think we should chip away at them one by one with each release of
Python.

Do you think that there is a particular difference (perhaps relating to
subclassing) that is the "real" difference and the rest are just
cosmetic?

> >  * users could stop using the type() function to get concrete
> >  types and
> > instead use __class__. After a version or two, type() could be
> > formally deprecated in favor of isinstance and __class__.
> 
> __class__ is a callable object. It has a __name__. From the
> Python side, a type isn't much more than an address. 

Type objects also have names. They are not (yet) callable but I cannot
think of a circumstance in which that would matter. It would require
code like this:

cls = getattr(foo, "__class__", None)
if cls:
    cls(...)

I don't know where the arglist for cls would come from. In general, I
can't imagine what the goal of this code would be. I can see code like
this in a "closed world" situation where I know all of the classes
involved, but I can't imagine a case where this kind of code will work
with any old class.

Anyhow, I think that type objects should be callable just like
classes...but I'm trying to pick off low-hanging fruit first. I think
that the less "superficial" differences there are between types and
classes, the easier it becomes to tackle the deep differences because
more code out there will be naturally polymorphic instead of using: 

if type(obj) is InstanceType: 
	do_onething() 
else: 
	do_anotherthing()

That is an evil pattern if we are going to merge types and classes.
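
For what it's worth, once types and classes are merged the polymorphic
spelling simply works uniformly; this is a sketch of the goal state, not
of anything in today's Python 2.1:

```python
class C:
    pass

c = C()
# With types and classes unified, __class__ and type() agree for
# instances and built-in objects alike -- no InstanceType test needed.
print(c.__class__ is C)            # True
print(type(c) is c.__class__)      # True
print((1).__class__ is int)        # True
print(type("").__name__)           # the string type's own name
```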

> Until
> Python's object model is redone, there are certain objects for
> which type(o) and o.__class__ return quite different things.

I am very nervous about waiting for a big-bang re-model of the object
model.

>...
> The major lesson I draw from ExtensionClass and friends is
> that achieving this behavior in today's Python is horrendously
> complex and fragile. Until we can do it right, I'd rather keep it
> simple (and keep the warts on the surface).

I'm trying to find an incremental way forward because nobody seems to
have time or energy for a big bang.

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook


From greg@cosc.canterbury.ac.nz  Sun Mar 25 21:53:02 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 26 Mar 2001 09:53:02 +1200 (NZST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <m14gEEA-000CnEC@artcom0.artcom-gmbh.de>
Message-ID: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz>

pf@artcom-gmbh.de (Peter Funk):

> All current filemanagers on Linux (KDE kfm, Gnome gmc) are flawed 
> in this respect.

I don't think you can call that a "flaw", given that these
filemanagers are only designed to deal with Unix file systems.

I think it's reasonable to only expect things in the platform
os module to deal with the platform's native file system.
Trying to anticipate how every platform's cross-platform
file servers for all other platforms are going to store their
data just isn't practical.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From guido@digicool.com  Mon Mar 26 02:03:52 2001
From: guido@digicool.com (Guido van Rossum)
Date: Sun, 25 Mar 2001 21:03:52 -0500
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: Your message of "Mon, 26 Mar 2001 09:53:02 +1200."
 <200103252153.JAA09102@s454.cosc.canterbury.ac.nz>
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz>
Message-ID: <200103260203.VAA05048@cj20424-a.reston1.va.home.com>

> > All current filemanagers on Linux (KDE kfm, Gnome gmc) are flawed 
> > in this respect.
> 
> I don't think you can call that a "flaw", given that these
> filemanagers are only designed to deal with Unix file systems.
> 
> I think it's reasonable to only expect things in the platform
> os module to deal with the platform's native file system.
> Trying to anticipate how every platform's cross-platform
> file servers for all other platforms are going to store their
> data just isn't practical.

You say that now, but as such cross-system servers become more common,
we should expect the tools to deal with them well, rather than
complain "the other guy doesn't play by our rules".

--Guido van Rossum (home page: http://www.python.org/~guido/)


From gmcm@hypernet.com  Mon Mar 26 02:44:59 2001
From: gmcm@hypernet.com (Gordon McMillan)
Date: Sun, 25 Mar 2001 21:44:59 -0500
Subject: [Python-Dev] Making types behave like classes
In-Reply-To: <3ABE3949.DE50540C@ActiveState.com>
Message-ID: <3ABE66DB.18389.1CB7239A@localhost>

[Gordon]
> > I think it would probably enhance confusion to have the "look
> > more like" without "being more like".
[Paul] 
> Looking more like is the same as being more like. In other words,
> there are a finite list of differences in behavior between types
> and classes and I think we should chip away at them one by one
> with each release of Python.

There's only one difference that matters: subclassing. I don't 
think there's an incremental path to that that leaves Python 
"easily extended".

[Gordon]
> > __class__ is a callable object. It has a __name__. From the
> > Python side, a type isn't much more than an address. 
> 
> Type objects also have names. 

But not a __name__.

> They are not (yet) callable but I
> cannot think of a circumstance in which that would matter. 

Take a look at copy.py.

> Anyhow, I think that type objects should be callable just like
> classes...but I'm trying to pick off low-hanging fruit first. I
> think that the less "superficial" differences there are between
> types and classes, the easier it becomes to tackle the deep
> differences because more code out there will be naturally
> polymorphic instead of using: 
> 
> if type(obj) is InstanceType: 
>  do_onething() 
> else: 
>  do_anotherthing()
> 
> That is an evil pattern if we are going to merge types and
> classes.

And it would likely become:
 if callable(obj.__class__):
   ....

Explicit is better than implicit for warts, too.
 


- Gordon


From moshez@zadka.site.co.il  Mon Mar 26 10:27:37 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Mon, 26 Mar 2001 12:27:37 +0200
Subject: [Python-Dev] sandbox?
Message-ID: <E14hUDp-0003tf-00@darjeeling>

I remember there was the discussion here about sandbox, but
I'm not sure I understand the rules. Checkin without asking
permission to sandbox ok? Just make my private dir and checkin
stuff?

Anybody who feels he can speak with authority is welcome ;-)
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From mwh21@cam.ac.uk  Mon Mar 26 13:18:26 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 26 Mar 2001 14:18:26 +0100
Subject: [Python-Dev] Re: Alleged deprecation of shutils
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> <200103260203.VAA05048@cj20424-a.reston1.va.home.com>
Message-ID: <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido@digicool.com> writes:

> > > All current filemanagers on Linux (KDE kfm, Gnome gmc) are flawed 
> > > in this respect.
> > 
> > I don't think you can call that a "flaw", given that these
> > filemanagers are only designed to deal with Unix file systems.
> > 
> > I think it's reasonable to only expect things in the platform
> > os module to deal with the platform's native file system.
> > Trying to anticipate how every platform's cross-platform
> > file servers for all other platforms are going to store their
> > data just isn't practical.
> 
> You say that now, but as such cross-system servers become more common,
> we should expect the tools to deal with them well, rather than
> complain "the other guy doesn't play by our rules".

So, a goal for 2.2: getting moving/copying/deleting of files and
directories working properly (ie. using native APIs) on all major
supported platforms, with all the legwork that implies.  We're not
really very far from this now, are we?  Perhaps (the functionality of)
shutil.{rmtree,copy,copytree} should move into os and if necessary be
implemented in nt or dos or mac or whatever.  Any others?
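
For reference, the shutil trio in question behaves like this (a
throwaway sketch; the file names and temporary paths are invented for
illustration):

```python
import os
import shutil
import tempfile

# Create a source tree with one file in it.
src = tempfile.mkdtemp()
with open(os.path.join(src, 'data.txt'), 'w') as f:
    f.write('hello')

dst = src + '.copy'
shutil.copytree(src, dst)          # recursive copy; dst must not exist yet
with open(os.path.join(dst, 'data.txt')) as f:
    content = f.read()
print(content)                     # hello

shutil.rmtree(src)                 # recursive delete
shutil.rmtree(dst)
```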

Cheers,
M.

-- 
39. Re graphics:  A picture is worth 10K  words - but only those
    to describe the picture. Hardly any sets of 10K words can be
    adequately described with pictures.
  -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From jack@oratrix.nl  Mon Mar 26 14:26:41 2001
From: jack@oratrix.nl (Jack Jansen)
Date: Mon, 26 Mar 2001 16:26:41 +0200
Subject: [Python-Dev] Re: Alleged deprecation of shutils
In-Reply-To: Message by Michael Hudson <mwh21@cam.ac.uk> ,
 26 Mar 2001 14:18:26 +0100 , <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <20010326142642.48DE836B2C0@snelboot.oratrix.nl>

> > You say that now, but as such cross-system servers become more common,
> > we should expect the tools to deal with them well, rather than
> > complain "the other guy doesn't play by our rules".
> 
> So, a goal for 2.2: getting moving/copying/deleting of files and
> directories working properly (ie. using native APIs) on all major
> supported platforms, with all the legwork that implies.

Well, if we want to support the case Guido sketches, a machine on one platform 
being fileserver for another platform, things may well be bleak.

For instance, most Apple-fileservers for Unix will use the .HSResource 
directory to store resource forks and the .HSancillary file to store mac 
file-info, but not all do. I haven't tried it yet, but from what I've read MacOSX 
over NFS uses a different scheme.

But, all that said, if we look only at a single platform the basic 
functionality of shutils should work. There's a Mac module (macostools) that 
has most of the functionality, but of course not all, and it has some extra as 
well, and not all names are the same (shutil compatibility wasn't a goal when 
it was written).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 




From guido@digicool.com  Mon Mar 26 14:33:00 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 09:33:00 -0500
Subject: [Python-Dev] sandbox?
In-Reply-To: Your message of "Mon, 26 Mar 2001 12:27:37 +0200."
 <E14hUDp-0003tf-00@darjeeling>
References: <E14hUDp-0003tf-00@darjeeling>
Message-ID: <200103261433.JAA06244@cj20424-a.reston1.va.home.com>

> I remember there was the discussion here about sandbox, but
> I'm not sure I understand the rules. Checkin without asking
> permission to sandbox ok? Just make my private dir and checkin
> stuff?
> 
> Anybody who feels he can speak with authority is welcome ;-)

We appreciate it if you ask first, but yes, sandbox is just what it
says.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Mon Mar 26 15:32:09 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 10:32:09 -0500
Subject: [Python-Dev] Re: Alleged deprecation of shutils
In-Reply-To: Your message of "26 Mar 2001 14:18:26 +0100."
 <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> <200103260203.VAA05048@cj20424-a.reston1.va.home.com>
 <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <200103261532.KAA06398@cj20424-a.reston1.va.home.com>

> So, a goal for 2.2: getting moving/copying/deleting of files and
> directories working properly (ie. using native APIs) on all major
> supported platforms, with all the legwork that implies.  We're not
> really very far from this now, are we?  Perhaps (the functionality of)
> shutil.{rmtree,copy,copytree} should move into os and if necessary be
> implemented in nt or dos or mac or whatever.  Any others?

Given that it's currently in shutil, please just consider improving
that, unless you believe that the basic API should be completely
different.  This sounds like something PEP-worthy!

--Guido van Rossum (home page: http://www.python.org/~guido/)


From moshez@zadka.site.co.il  Mon Mar 26 15:49:10 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Mon, 26 Mar 2001 17:49:10 +0200
Subject: [Python-Dev] sandbox?
In-Reply-To: <200103261433.JAA06244@cj20424-a.reston1.va.home.com>
References: <200103261433.JAA06244@cj20424-a.reston1.va.home.com>, <E14hUDp-0003tf-00@darjeeling>
Message-ID: <E14hZF0-0004Mj-00@darjeeling>

On Mon, 26 Mar 2001 09:33:00 -0500, Guido van Rossum <guido@digicool.com> wrote:
 
> We appreciate it if you ask first, but yes, sandbox is just what it
> says.

OK, thanks.
I want to checkin my Rational class to the sandbox, probably make
a directory rational/ and put it there.
 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From jeremy@alum.mit.edu  Mon Mar 26 17:57:26 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Mon, 26 Mar 2001 12:57:26 -0500 (EST)
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>
References: <20010324214748.A32161@glacier.fnational.com>
 <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>
Message-ID: <15039.33542.399553.604556@slothrop.digicool.com>

>>>>> "TP" == Tim Peters <tim.one@home.com> writes:

  >> I'm not sure about Jython.  The sys._getframe(), frame.suspend(),
  >> and frame.resume() low level interface is nice.  I think Jython
  >> must know which frames are going to be suspended at compile time.

  TP> Yes, Samuele said as much.  My belief is that generators don't
  TP> become *truly* pleasant unless "yield" ("suspend"; whatever) is
  TP> made a new statement type.  Then Jython knows exactly where
  TP> yields can occur.  As in CLU (but not Icon), it would also be
  TP> fine by me if routines *used* as generators also needed to be
  TP> explicitly marked as such (this is a non-issue in Icon because
  TP> *every* Icon expression "is a generator" -- there is no other
  TP> kind of procedure there).

If "yield" is a keyword, then any function that uses yield is a
generator.  With this policy, it's straightforward to determine which
functions are generators at compile time.  It's also Pythonic:
Assignment to a name denotes local scope; use of yield denotes
generator. 
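
This is in fact how the keyword approach can work: the compiler decides
from the mere presence of "yield" in a function body. A quick
illustration with generator syntax as it could look (counter/plain are
invented names):

```python
import inspect

def counter(n):
    # The presence of "yield" anywhere in the body makes this a
    # generator function -- determined at compile time.
    for i in range(n):
        yield i

def plain(n):
    return list(range(n))

print(inspect.isgeneratorfunction(counter))  # True
print(inspect.isgeneratorfunction(plain))    # False
print(list(counter(3)))                      # [0, 1, 2]
```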

Jeremy


From jeremy@digicool.com  Mon Mar 26 19:49:31 2001
From: jeremy@digicool.com (Jeremy Hylton)
Date: Mon, 26 Mar 2001 14:49:31 -0500 (EST)
Subject: [Python-Dev] SF bugs tracker?
Message-ID: <15039.40267.489930.186757@localhost.localdomain>

I've been unable to reach the bugs tracker today.  Every attempt
results in a document-contains-no-data error.  Has anyone else had any
luck?

Jeremy



From jack@oratrix.nl  Mon Mar 26 19:55:40 2001
From: jack@oratrix.nl (Jack Jansen)
Date: Mon, 26 Mar 2001 21:55:40 +0200
Subject: [Python-Dev] test_coercion failing
In-Reply-To: Message by "Tim Peters" <tim.one@home.com> ,
 Wed, 21 Mar 2001 15:18:54 -0500 , <LNBBLJKPBEHFEDALKOLCMEEOJHAA.tim.one@home.com>
Message-ID: <20010326195546.238C0EDD21@oratrix.oratrix.nl>

Well, it turns out that disabling fused-add-mul indeed fixes the
problem. The CodeWarrior manual warns that results may be slightly
different with and without fused instructions, but the example they
give is with operations apparently done in higher precision with the
fused instructions. No word about nonstandard behaviour for +0.0 and
-0.0.

As this seems to be a PowerPC issue, not a MacOS issue, it is
something that other PowerPC porters may want to look out for too
(does AIX still exist?).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 


From guido@digicool.com  Mon Mar 26 08:14:14 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 03:14:14 -0500
Subject: [Python-Dev] SF bugs tracker?
In-Reply-To: Your message of "Mon, 26 Mar 2001 14:49:31 EST."
 <15039.40267.489930.186757@localhost.localdomain>
References: <15039.40267.489930.186757@localhost.localdomain>
Message-ID: <200103260814.DAA01098@cj20424-a.reston1.va.home.com>

> I've been unable to reach the bugs tracker today.  Every attempt
> results in a document-contains-no-data error.  Has anyone else had any
> luck?

This is a bizarre SF bug.  When you're browsing patches, clicking on
Bugs will give you this error, and vice versa.

My workaround: go to my personal page, click on a bug listed there,
and make an empty change (i.e. click Submit Changes without making any
changes).  This will present the Bugs browser.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Mon Mar 26 09:46:48 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 04:46:48 -0500
Subject: [Python-Dev] WANTED: chairs for next Python conference
Message-ID: <200103260946.EAA02170@cj20424-a.reston1.va.home.com>

I'm looking for chairs for the next Python conference.  At least the
following positions are still open: BOF chair (new!), Application
track chair, Tools track chair.  (The Apps and Tools tracks are
roughly what the Zope and Apps tracks were this year.)  David Ascher
is program chair, I am conference chair (again).

We're in the early stages of conference organization; Foretec is
looking at having it in a Southern city in the US, towards the end of
February 2002.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From paulp@ActiveState.com  Mon Mar 26 22:06:42 2001
From: paulp@ActiveState.com (Paul Prescod)
Date: Mon, 26 Mar 2001 14:06:42 -0800
Subject: [Python-Dev] Making types behave like classes
References: <3ABE66DB.18389.1CB7239A@localhost>
Message-ID: <3ABFBD72.30F69817@ActiveState.com>

Gordon McMillan wrote:
> 
>..
> 
> There's only one difference that matters: subclassing. I don't
> think there's an incremental path to that that leaves Python
> "easily extended".

All of the differences matter! Inconsistency is a problem in and of
itself.

> But not a __name__.

They really do have __name__s. Try it. type("").__name__

> 
> > They are not (yet) callable but I
> > cannot think of a circumstance in which that would matter.
> 
> Take a look at copy.py.

copy.py only expects the type object to be callable WHEN there is a
__getinitargs__ method. Types won't have this method so it won't use the
class callably. Plus, the whole section only gets run for objects of
type InstanceType.

The important point is that it is not useful to know that __class__ is
callable without knowing the arguments it takes. __class__ is much more
often used as a unique identifier for pointer equality and/or for the
__name__. In looking through the standard library, I can only see places
where the code would improve if __class__ were available for extension
objects.

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook


From tim.one@home.com  Mon Mar 26 22:08:30 2001
From: tim.one@home.com (Tim Peters)
Date: Mon, 26 Mar 2001 17:08:30 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <20010326195546.238C0EDD21@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEHPJIAA.tim.one@home.com>

[Jack Jansen]
> Well, it turns out that disabling fused-add-mul indeed fixes the
> problem. The CodeWarrior manual warns that results may be slightly
> different with and without fused instructions, but the example they
> give is with operations apparently done in higher precision with the
> fused instructions. No word about nonstandard behaviour for +0.0 and
> -0.0.
>
> As this seems to be a PowerPC issue, not a MacOS issue, it is
> something that other PowerPC porters may want to look out for too
> (does AIX still exist?).

The PowerPC architecture's fused instructions are wonderful for experts,
because in a*b+c (assuming IEEE doubles w/ 53 bits of precision) they compute
the a*b part to 106 bits of precision internally, and the add of c gets to
see all of them.  This is great if you *know* c is pretty much the negation
of the high-order 53 bits of the product, because it lets you get at the
*lower* 53 bits too; e.g.,

    hipart = a*b;
    lopart = a*b - hipart;  /* assuming fused mul-sub is generated */

gives a pair of doubles (hipart, lopart) whose mathematical (not f.p.) sum
hipart + lopart is exactly equal to the mathematical (not f.p.) product a*b.
In the hands of an expert, this can, e.g., be used to write ultra-fast
high-precision math libraries:  it gives a very cheap way to get the effect
of computing with about twice the native precision.
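
The effect of the fused mul-sub can be simulated exactly with rational
arithmetic; in this sketch, Fraction stands in for the 106-bit internal
product that the fused instruction sees (values are invented for
illustration):

```python
from fractions import Fraction

a = b = 1.0 + 2.0 ** -30      # exactly representable as an IEEE double
hi = a * b                    # ordinary product, rounded to 53 bits
# What a fused mul-sub computes in one instruction: a*b - hi using the
# full-width product, i.e. exactly the low-order bits hi discarded.
lo = Fraction(a) * Fraction(b) - Fraction(hi)
print(lo == Fraction(1, 2 ** 60))   # True: hi + lo == a*b, mathematically
```

Without the fused instruction, a*b - hi is computed from the already
rounded product and comes out exactly 0.0.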

So that's the kind of thing they're warning you about:  without the fused
mul-sub, "lopart" above is always computed to be exactly 0.0, and so is
useless.  Contrarily, some fp algorithms *depend* on cancelling out oodles of
leading bits in intermediate results, and in the presence of fused mul-add
deliver totally bogus results.

However, screwing up 0's sign bit has nothing to do with any of that, and if
the HW is producing -0 for a fused (+anything)*(+0)-(+0), it can't be called
anything other than a HW bug (assuming it's not in the to-minus-infinity
rounding mode).

When a given compiler generates fused instructions (when available) is a
cross-compiler crap-shoot, and the compiler you're using *could* have
generated them before with the same end result.  There's really nothing
portable we can do in the source code to convince a compiler never to
generate them.  So it looks like you're stuck with a compiler switch here.

not-the-outcome-i-was-hoping-for-but-i'll-take-it<wink>-ly y'rs  - tim



From tim.one@home.com  Mon Mar 26 22:08:37 2001
From: tim.one@home.com (Tim Peters)
Date: Mon, 26 Mar 2001 17:08:37 -0500
Subject: [Python-Dev] SF bugs tracker?
In-Reply-To: <200103260814.DAA01098@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHPJIAA.tim.one@home.com>

[Jeremy]
> I've been unable to reach the bugs tracker today.  Every attempt
> results in a document-contains-no-data error.  Has anyone else had any
> luck?

[Guido]
> This is a bizarre SF bug.  When you're browsing patches, clicking on
> Bugs will give you this error, and vice versa.
>
> My workaround: go to my personal page, click on a bug listed there,
> and make an empty change (i.e. click Submit Changes without making any
> changes).  This will present the Bugs browser.

Possibly unique to Netscape?  I've never seen this behavior -- although
sometimes I have trouble getting to *patches*, but only when logged in.

clear-the-cache-and-reboot<wink>-ly y'rs  - tim



From moshez@zadka.site.co.il  Mon Mar 26 22:26:44 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Tue, 27 Mar 2001 00:26:44 +0200
Subject: [Python-Dev] Release 2.0.1: Heads Up
Message-ID: <E14hfRk-00051d-00@darjeeling>

Greetings, earthlings!

As Guido said at the last conference, there is going to be a bugfix
release of Python 2.0: Python 2.0.1. Originally it was meant to be only a
license bugfix release, but comments in the Python community have
indicated a need for a real bugfix release. Aahz has written PEP 6[1],
which outlines a procedure for such releases. With Guido's blessing, I
have volunteered to be the
Patch Czar (see the PEP!) for the 2.0.1 release. In this job, I intend
to be feared and hated throughout the Python community -- men will 
tremble to hear the sounds of my footsteps...err...sorry, got sidetracked.

This is the first Python pure bugfix release, and I feel a lot of weight
rests on my shoulders as to whether this experiment is successful. Since
this is the first bugfix release, I intend to be ultra-super-conservative.
I can live with a release that does not fix all the bugs; I am very afraid
of a release that breaks a single person's code. Such a thing would give
Python bugfix releases a very bad reputation. So, I am going to be a very
strict Czar.

I will try to follow consistent rules about which patches to integrate,
but I am only human. I will make all my decisions in public, so they
will be open to review by the community.

There are a few rules I intend to go by:

1. No fixes which you have to change your code to enjoy. (E.g., adding a new
   function because the previous API was idiotic)
2. No fixes which have not been applied to the main branch, unless they
   are not relevant to the main branch at all. I much prefer to get a pointer
   to an applied patch or CVS checkin message than a fresh patch. Of course,
   there are cases where this is impossible, so this isn't strict.
3. No fixes which have "stricter checking". Stricter checking is a good
   thing, but not in bug fix releases.
4. No fixes which have a reasonable chance of breaking someone's code. That
   means that if there's a bug people have a good chance of counting on,
   it won't be fixed.
5. No "improved documentation/error message" patches. This is stuff that
   gets in people's eyeballs -- I want bugfix upgrade to be as smooth
   as possible.
6. No "internal code was cleaned up". That's a good thing in the development
   branch, but not in bug fix releases.

Note that these rules will *not* be made more lenient, but they might
get stricter, if it seems such strictness is needed in order to make
sure bug fix releases are smooth enough.

However, please remember that this is intended to help you -- the Python
using community. So please, let me know of bugfixes that you need or want
in Python 2.0. I promise that I will consider every request.
Note also that the Patch Czar is given very few responsibilities --
all my decisions are subject to Guido's approval. That means that he
gets the final word about each patch.

I intend to post a list of patches I intend to integrate soon -- at the
latest, this Friday, hopefully sooner. I expect to have 2.0.1a1 a week
after that, and further schedule requirements will follow from the
quality of that release. Because it has the dual purpose of also being
a license bugfix release, the schedule might be influenced by
non-technical issues. As always, Guido will be the final arbiter.

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From martin@loewis.home.cs.tu-berlin.de  Mon Mar 26 23:00:24 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 27 Mar 2001 01:00:24 +0200
Subject: [Python-Dev] Release 2.0.1: Heads Up
Message-ID: <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de>

> I have volunteered to be the Patch Czar (see the PEP!) for the 2.0.1
> release

Great!

> So please, let me know of bugfixes that you need or want in Python
> 2.0.

In addition to your procedures (which are all very reasonable), I'd
like to point out that Tim has created a 2.0.1 patch class on the SF
patch manager. I hope you find the time to review the patches in there
(which should not be very difficult at the moment). This is meant for
patches which can't be proposed in terms of 'cvs diff' commands; for
mere copying of code from the mainline, this is probably overkill.

Also note that I have started to give a detailed analysis of what
exactly has changed in the NEWS file of the 2.0 maintenance branch --
I'm curious to know what you think about this procedure. If you don't like
it, feel free to undo my changes there.

Regards,
Martin


From guido@digicool.com  Mon Mar 26 11:23:08 2001
From: guido@digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 06:23:08 -0500
Subject: [Python-Dev] Release 2.0.1: Heads Up
In-Reply-To: Your message of "Tue, 27 Mar 2001 01:00:24 +0200."
 <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de>
References: <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de>
Message-ID: <200103261123.GAA02697@cj20424-a.reston1.va.home.com>

> > I have volunteered to be the Patch Czar (see the PEP!) for the 2.0.1
> > release
> 
> Great!

Congratulations to Moshe.

> > So please, let me know of bugfixes that you need or want in Python
> > 2.0.
> 
> In addition to your procedures (which are all very reasonable), I'd
> like to point out that Tim has created a 2.0.1 patch class on the SF
> patch manager. I hope you find the time to review the patches in there
> (which should not be very difficult at the moment). This is meant for
> patches which can't be proposed in terms of 'cvs diff' commands; for
> mere copying of code from the mainline, this is probably overkill.
> 
> Also note that I have started to give a detailed analysis of what
> exactly has changed in the NEWS file of the 2.0 maintenance branch -
> I'm curious to know what you think about procedure. If you don't like
> it, feel free to undo my changes there.

Regardless of what Moshe thinks, *I* think that's a great idea.  I
hope that Moshe continues this.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From aahz@pobox.com (Aahz Maruch)  Mon Mar 26 23:35:55 2001
From: aahz@pobox.com (Aahz Maruch)
Date: Mon, 26 Mar 2001 15:35:55 -0800 (PST)
Subject: [Python-Dev] PEP 6 cleanup
Message-ID: <200103262335.SAA22663@panix3.panix.com>

Now that Moshe has agreed to be Patch Czar for 2.0.1, I'd like some
clarification/advice on a couple of issues before I release the next
draft:

Issues To Be Resolved

    What is the equivalent of python-dev for people who are responsible
    for maintaining Python?  (Aahz proposes either python-patch or
    python-maint, hosted at either python.org or xs4all.net.)

    Does SourceForge make it possible to maintain both separate and
    combined bug lists for multiple forks?  If not, how do we mark bugs
    fixed in different forks?  (Simplest is to generate a new bug
    for each fork that it gets fixed in, referring back to the main bug
    number for details.)


From moshez@zadka.site.co.il  Mon Mar 26 23:49:33 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Tue, 27 Mar 2001 01:49:33 +0200
Subject: [Python-Dev] Release 2.0.1: Heads Up
In-Reply-To: <200103261123.GAA02697@cj20424-a.reston1.va.home.com>
References: <200103261123.GAA02697@cj20424-a.reston1.va.home.com>, <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de>
Message-ID: <E14hgjt-0005KI-00@darjeeling>

On Mon, 26 Mar 2001 06:23:08 -0500, Guido van Rossum <guido@digicool.com> wrote:

> > Also note that I have started to give a detailed analysis of what
> > exactly has changed in the NEWS file of the 2.0 maintenance branch -
> > I'm curious to know what you think about procedure. If you don't like
> > it, feel free to undo my changes there.
> 
> Regardless of what Moshe thinks, *I* think that's a great idea.  I
> hope that Moshe continues this.

I will; I think this is a good idea too.
I'm still working on a log detailing the patches I intend to backport
(some will take some effort because of several major overhauls I do
*not* intend to backport, like reindentation and string methods).
I already trimmed it down to 200-something patches I'm going to consider
integrating, and I'm now making a second pass over the list.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From nas@python.ca  Tue Mar 27 04:43:33 2001
From: nas@python.ca (Neil Schemenauer)
Date: Mon, 26 Mar 2001 20:43:33 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>; from tim.one@home.com on Sun, Mar 25, 2001 at 01:11:58AM -0500
References: <20010324214748.A32161@glacier.fnational.com> <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>
Message-ID: <20010326204333.A17390@glacier.fnational.com>

Tim Peters wrote:
> My belief is that generators don't become *truly* pleasant
> unless "yield" ("suspend"; whatever) is made a new statement
> type.

That's fine, but how do you create a generator?  I suppose that
using a "yield" statement within a function could make it into a
generator.  Then, calling it would create an instance of a
generator.  Seems a bit too magical to me.
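[Ed. note: this is in fact how generators were eventually specified in
PEP 255: a "yield" anywhere in a function body makes it a generator
function, and calling it builds a suspended generator object without
running any body code.  In present-day Python:]

```python
def gen():
    yield 1          # the mere presence of yield makes gen a generator function

g = gen()            # calling it creates a generator; no body code has run yet
assert next(g) == 1  # the body only executes once the generator is advanced
```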

  Neil


From nas@arctrix.com  Tue Mar 27 05:08:24 2001
From: nas@arctrix.com (Neil Schemenauer)
Date: Mon, 26 Mar 2001 21:08:24 -0800
Subject: [Python-Dev] nano-threads?
Message-ID: <20010326210824.B17390@glacier.fnational.com>

Here are some silly bits of code implementing single frame
coroutines and threads using my frame suspend/resume patch.
The coroutine example does not allow a value to be passed but
that would be simple to add.  An updated version of the (very
experimental) patch is here:

    http://arctrix.com/nas/generator3.diff

For me, thinking in terms of frames is quite natural and I didn't
have any trouble writing these examples.  I'm hoping they will be
useful to other people who are trying to get their minds around
continuations.  If you're sick of such postings on python-dev, flame
me privately and I will stop.  Cheers,

  Neil

#####################################################################
# Single frame threads (nano-threads?).  Output should be:
#
# foo
# bar
# foo
# bar
# bar

import sys

def yield():
    f = sys._getframe(1)
    f.suspend(f)

def run_threads(threads):
    frame = {}
    for t in threads:
        frame[t] = t()
    while threads:
        for t in threads[:]:
            f = frame.get(t)
            if not f:
                threads.remove(t)
            else:
                frame[t] = f.resume()


def foo():
    for x in range(2):
        print "foo"
        yield()

def bar():
    for x in range(3):
        print "bar"
        yield()

def test():
    run_threads([foo, bar])

test()
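[Ed. note: for comparison, the same round-robin scheduler can be written
with the ordinary generators that eventually shipped in Python 2.2, with
no frame suspend/resume patch needed (shown in present-day syntax):]

```python
def foo():
    for x in range(2):
        print("foo")
        yield

def bar():
    for x in range(3):
        print("bar")
        yield

def run_threads(threads):
    # round-robin over generator objects, dropping each one as it finishes
    gens = [t() for t in threads]
    while gens:
        for g in gens[:]:
            try:
                next(g)
            except StopIteration:
                gens.remove(g)

run_threads([foo, bar])
```

The output interleaves exactly as in the nano-threads example above:
foo, bar, foo, bar, bar.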

#####################################################################
# Single frame coroutines.  Should print:
#
# foo
# bar
# baz
# foo
# bar
# baz
# foo
# ...

import sys

def transfer(func):
    f = sys._getframe(1)
    f.suspend((f, func))

def run_coroutines(args):
    funcs = {}
    for f in args:
        funcs[f] = f
    current = args[0]
    while 1:
        rv = funcs[current]()
        if not rv:
            break
        (frame, next) = rv
        funcs[current] = frame.resume
        current = next


def foo():
    while 1:
        print "foo"
        transfer(bar)

def bar():
    while 1:
        print "bar"
        transfer(baz)
        transfer(foo)


From greg@cosc.canterbury.ac.nz  Tue Mar 27 05:48:24 2001
From: greg@cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 27 Mar 2001 17:48:24 +1200 (NZST)
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <15039.33542.399553.604556@slothrop.digicool.com>
Message-ID: <200103270548.RAA09571@s454.cosc.canterbury.ac.nz>

Jeremy Hylton <jeremy@alum.mit.edu>:

> If "yield" is a keyword, then any function that uses yield is a
> generator.  With this policy, it's straightforward to determine which
> functions are generators at compile time.

But a function which calls a function that contains
a "yield" is a generator, too. Does the compiler need
to know about such functions?
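[Ed. note: as generators were eventually specified in PEP 255, the answer
is no: only the function whose body lexically contains the "yield" is a
generator function; its callers are ordinary functions that merely receive
a generator object.  A quick check in present-day Python:]

```python
import inspect

def gen():
    yield 1

def caller():
    return gen()   # an ordinary function: it just returns the generator object

assert inspect.isgeneratorfunction(gen)
assert not inspect.isgeneratorfunction(caller)
```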

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg@cosc.canterbury.ac.nz	   +--------------------------------------+


From jeremy@digicool.com  Tue Mar 27 17:06:20 2001
From: jeremy@digicool.com (Jeremy Hylton)
Date: Tue, 27 Mar 2001 12:06:20 -0500 (EST)
Subject: [Python-Dev] distutils change breaks code, Pyfort
In-Reply-To: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>
References: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>
Message-ID: <15040.51340.820929.133487@localhost.localdomain>

>>>>> "PFD" == Paul F Dubois <paul@pfdubois.com> writes:

  PFD> The requirement of a version argument to the distutils command
  PFD> breaks Pyfort and many of my existing packages. These packages
  PFD> are not intended for use with the distribution commands and a
  PFD> package version number would be meaningless.

  PFD> I will make a new Pyfort that supplies a version number to the
  PFD> call it makes to setup. However, I think this change to
  PFD> distutils is a poor idea. If the version number would be
  PFD> required for the distribution commands, let *them* complain,
  PFD> perhaps by setting a default value of
  PFD> time.asctime(time.gmtime()) or something that the distribution
  PFD> commands could object to.

  PFD> I apologize if I missed an earlier discussion of this change
  PFD> that seems to be in 2.1b2 but not 2.1b1, as I am new to this
  PFD> list.

I haven't seen any discussion of this distutils change on this list.
It's a good question, though.  Should distutils be allowed to change
between beta releases in a way that breaks user code?

There are two possibilities:

1. Guido has decided that distutils release cycles need not be related
   to Python release cycles.  He has said as much for pydoc.  If so,
   the timing of the change is just an unhappy coincidence.

2. Distutils is considered to be part of the standard library and
   should follow the same rules as the rest of the library.  No new
   features after the first beta release, just bug fixes.  And no
   incompatible changes without ample warning.

I think that distutils is mature enough to follow the second set of
rules -- and that the change should be reverted before the final
release.

Jeremy



From gward@python.net  Tue Mar 27 17:09:15 2001
From: gward@python.net (Greg Ward)
Date: Tue, 27 Mar 2001 12:09:15 -0500
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>; from loewis@informatik.hu-berlin.de on Sat, Mar 24, 2001 at 01:02:53PM +0100
References: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>
Message-ID: <20010327120915.A16082@cthulhu.gerg.ca>

On 24 March 2001, Martin von Loewis said:
> There should be a mechanism to tell setup.py not to build a module at
> all. Since it is looking into Modules/Setup anyway, perhaps a
> 
> *excluded*
> dbm
> 
> syntax in Modules/Setup would be appropriate? Of course, makesetup
> needs to be taught such a syntax. Alternatively, an additional
> configuration file or command line options might work.

FWIW, any new "Setup" syntax would also have to be taught to the
'read_setup_file()' function in distutils.extension.

        Greg
-- 
Greg Ward - nerd                                        gward@python.net
http://starship.python.net/~gward/
We have always been at war with Oceania.


From gward@python.net  Tue Mar 27 17:13:35 2001
From: gward@python.net (Greg Ward)
Date: Tue, 27 Mar 2001 12:13:35 -0500
Subject: [Python-Dev] Re: Alleged deprecation of shutils
In-Reply-To: <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>; from mwh21@cam.ac.uk on Mon, Mar 26, 2001 at 02:18:26PM +0100
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> <200103260203.VAA05048@cj20424-a.reston1.va.home.com> <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <20010327121335.B16082@cthulhu.gerg.ca>

On 26 March 2001, Michael Hudson said:
> So, a goal for 2.2: getting moving/copying/deleting of files and
> directories working properly (ie. using native APIs) on all major
> supported platforms, with all the legwork that implies.  We're not
> really very far from this now, are we?  Perhaps (the functionality of)
> shutil.{rmtree,copy,copytree} should move into os and if necessary be
> implemented in nt or dos or mac or whatever.  Any others?

The code already exists, in distutils/file_utils.py.  It's just a
question of giving it a home in the main body of the standard library.

(FWIW, the reasons I didn't patch shutil.py are 1) I didn't want to be
constrained by backward compatibility, and 2) I didn't have a time
machine to go back and change shutil.py in all existing 1.5.2
installations.)

        Greg
-- 
Greg Ward - just another /P(erl|ython)/ hacker          gward@python.net
http://starship.python.net/~gward/
No animals were harmed in transmitting this message.


From guido@digicool.com  Tue Mar 27 05:33:46 2001
From: guido@digicool.com (Guido van Rossum)
Date: Tue, 27 Mar 2001 00:33:46 -0500
Subject: [Python-Dev] distutils change breaks code, Pyfort
In-Reply-To: Your message of "Tue, 27 Mar 2001 12:06:20 EST."
 <15040.51340.820929.133487@localhost.localdomain>
References: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>
 <15040.51340.820929.133487@localhost.localdomain>
Message-ID: <200103270533.AAA04707@cj20424-a.reston1.va.home.com>

> >>>>> "PFD" == Paul F Dubois <paul@pfdubois.com> writes:
> 
>   PFD> The requirement of a version argument to the distutils command
>   PFD> breaks Pyfort and many of my existing packages. These packages
>   PFD> are not intended for use with the distribution commands and a
>   PFD> package version number would be meaningless.
> 
>   PFD> I will make a new Pyfort that supplies a version number to the
>   PFD> call it makes to setup. However, I think this change to
>   PFD> distutils is a poor idea. If the version number would be
>   PFD> required for the distribution commands, let *them* complain,
>   PFD> perhaps by setting a default value of
>   PFD> time.asctime(time.gmtime()) or something that the distribution
>   PFD> commands could object to.
> 
>   PFD> I apologize if I missed an earlier discussion of this change
>   PFD> that seems to be in 2.1b2 but not 2.1b1, as I am new to this
>   PFD> list.
> 
> I haven't seen any discussion of this distutils change on this list.
> It's a good question, though.  Should distutils be allowed to change
> between beta releases in a way that breaks user code?
> 
> There are two possibilities:
> 
> 1. Guido has decided that distutils release cycles need not be related
>    to Python release cycles.  He has said as much for pydoc.  If so,
>    the timing of the change is just an unhappy coincidence.
> 
> 2. Distutils is considered to be part of the standard library and
>    should follow the same rules as the rest of the library.  No new
>    features after the first beta release, just bug fixes.  And no
>    incompatible changes without ample warning.
> 
> I think that distutils is mature enough to follow the second set of
> rules -- and that the change should be reverted before the final
> release.
> 
> Jeremy

I agree.  *Allowing* a version argument is fine.  *Requiring* it is
too late in the game.  (And may be a wrong choice anyway, but I'm not
sure of the issues.)

--Guido van Rossum (home page: http://www.python.org/~guido/)


From fdrake@acm.org  Wed Mar 28 14:39:42 2001
From: fdrake@acm.org (Fred L. Drake, Jr.)
Date: Wed, 28 Mar 2001 09:39:42 -0500 (EST)
Subject: [Python-Dev] SF bugs tracker?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEHPJIAA.tim.one@home.com>
References: <200103260814.DAA01098@cj20424-a.reston1.va.home.com>
 <LNBBLJKPBEHFEDALKOLCMEHPJIAA.tim.one@home.com>
Message-ID: <15041.63406.740044.659810@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Possibly unique to Netscape?  I've never seen this behavior -- although
 > sometimes I have trouble getting to *patches*, but only when logged in.

  No -- I was getting this with Konqueror as well.  Konqueror is the
KDE 2 browser/file manager.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From moshez@zadka.site.co.il  Wed Mar 28 17:02:01 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 19:02:01 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
Message-ID: <E14iJKb-0000Kf-00@darjeeling>

After labouring over the list of log messages for 2-3 days, I finally
have a tentative list of changes. I present it as a list of checkin
messages, complete with the versions. Sometimes I concatenated several
consecutive checkins into one -- "I fixed the bug", "oops, typo last
fix" and similar.

Please go over the list and see if there's anything you feel should
not go in.
I'll write a short script that will dump patch files later today, so I
can start applying soon -- so please look over the list and check that
I have not made any terrible mistakes.
Thanks in advance.

Wholesale: Lib/tempfile.py (modulo __all__)
           Lib/sre.py
           Lib/sre_compile.py
           Lib/sre_constants.py
           Lib/sre_parse.py
           Modules/_sre.c          
----------------------------
Lib/locale.py, 1.15->1.16
setlocale(): In _locale-missing compatibility function, string
comparison should be done with != instead of "is not".
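[Ed. note: the distinction matters because equal strings need not be the
same object, so "is not" can report a spurious mismatch where "!=" would
not.  For example:]

```python
a = "en_US"
b = "_".join(["en", "US"])  # equal value, but a distinct string object

assert a == b               # value comparison: the locale names match
assert a is not b           # identity test would wrongly treat them as different
```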
----------------------------
Lib/xml/dom/pulldom.py, 1.20->1.21

When creating an attribute node using createAttribute() or
createAttributeNS(), use the parallel setAttributeNode() or
setAttributeNodeNS() to add the node to the document -- do not assume
that setAttributeNode() will operate properly for both.
----------------------------
Python/pythonrun.c, 2.128->2.129
Fix memory leak with SyntaxError.  (The DECREF was originally hidden
inside a piece of code that was deemed redundant; the DECREF was
unfortunately *not* redundant!)
----------------------------
Lib/quopri.py, 1.10->1.11
Strip \r as trailing whitespace as part of soft line endings.

Inspired by SF patch #408597 (Walter Dörwald): quopri, soft line
breaks and CRLF.  (I changed (" ", "\t", "\r") into " \t\r".)
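[Ed. note: a quoted-printable soft line break is an "=" followed by the
line ending -- either "=\n" or "=\r\n" -- and decoding removes it
entirely, rejoining the split line:]

```python
import quopri

# both line-ending flavours of a soft break vanish on decoding
assert quopri.decodestring(b"long =\r\nline") == b"long line"
assert quopri.decodestring(b"long =\nline") == b"long line"
```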
----------------------------
Modules/bsddbmodule.c, 1.28->1.29
Don't raise MemoryError in keys() when the database is empty.

This fixes SF bug #410146 (python 2.1b shelve is broken).
----------------------------
Lib/fnmatch.py, 1.10->1.11

Donovan Baarda <abo@users.sourceforge.net>:
Patch to make "\" in a character group work properly.

This closes SF bug #409651.
----------------------------
Objects/complexobject.c, 2.34->2.35
SF bug [ #409448 ] Complex division is braindead
http://sourceforge.net/tracker/?func=detail&aid=409448&group_id=5470&atid=105470
Now less braindead.  Also added test_complex.py, which doesn't test much, but
fails without this patch.
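[Ed. note: the "braindead" part was that the naive formula computes
c*c + d*d in the denominator, which overflows when the components are
large even though the quotient itself is tame.  The scaled division the
fix introduced (and which CPython still uses) survives:]

```python
# naive denominator c*c + d*d would overflow to inf for components near 1e300,
# yet CPython's scaled complex division returns the correct quotient
z = (1e300 + 1e300j) / (2e300 + 2e300j)
assert abs(z - 0.5) < 1e-12
```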
----------------------------
Modules/cPickle.c, 2.54->2.55
SF bug [ #233200 ] cPickle does not use Py_BEGIN_ALLOW_THREADS.
http://sourceforge.net/tracker/?func=detail&aid=233200&group_id=5470&atid=105470
Wrapped the fread/fwrite calls in thread BEGIN_ALLOW/END_ALLOW brackets
Afraid I hit the "delete trailing whitespace key" too!  Only two "real" sections
of code changed here.
----------------------------
Lib/xml/sax/xmlreader.py, 1.13->1.14

Import the exceptions that this module can raise.
----------------------------
Lib/xmllib.py, 1.27->1.28
Moved clearing of "literal" flag.  The flag is set in setliteral which
can be called from a start tag handler.  When the corresponding end
tag is read the flag is cleared.  However, it didn't get cleared when
the start tag was for an empty element of the type <tag .../>.  This
modification fixes the problem.
----------------------------
Modules/pwdmodule.c, 1.24->1.25
Modules/grpmodule.c, 1.14->1.15

Make sure we close the group and password databases when we are done with
them; this closes SF bug #407504.
----------------------------
Python/errors.c, 2.61->2.62
Objects/intobject.c, 2.55->2.56
Modules/timemodule.c, 2.107->2.108
Use Py_CHARMASK for ctype macros. Fixes bug #232787.
----------------------------
Modules/termios.c, 2.17->2.18

Add more protection around the VSWTC/VSWTCH, CRTSCTS, and XTABS symbols;
these can be missing on some (all?) Irix and Tru64 versions.

Protect the CRTSCTS value with a cast; this can be a larger value on
Solaris/SPARC.

This should fix SF tracker items #405092, #405350, and #405355.
----------------------------
Modules/pyexpat.c, 2.42->2.43

Wrap some long lines, use only C89 /* */ comments, and add spaces around
some operators (style guide conformance).
----------------------------
Modules/termios.c, 2.15->2.16

Revised version of Jason Tishler's patch to make this compile on Cygwin,
which does not define all the constants.

This closes SF tracker patch #404924.
----------------------------
Modules/bsddbmodule.c, 1.27->1.28

Gustavo Niemeyer <niemeyer@conectiva.com>:
Fixed recno support (keys are integers rather than strings).
Work around a DB bug that causes stdin to be closed by rnopen() when the
DB file needed to exist but did not (no longer segfaults).

This closes SF tracker patch #403445.

Also wrapped some long lines and added whitespace around operators -- FLD.
----------------------------
Lib/urllib.py, 1.117->1.118
Fixing bug #227562 by calling  URLopener.http_error_default when
an invalid 401 request is being handled.
----------------------------
Python/compile.c, 2.170->2.171
Shuffle premature decref; nuke unreachable code block.
Fixes the "debug-build -O test_builtin.py and no test_b2.pyo" crash just
discussed on Python-Dev.
----------------------------
Python/import.c, 2.161->2.162
The code in PyImport_Import() tried to save itself a bit of work and
save the __builtin__ module in a static variable.  But this doesn't
work across Py_Finalise()/Py_Initialize()!  It also doesn't work when
using multiple interpreter states created with PyInterpreterState_New().

So I'm ripping out this small optimization.

This was probably broken since PyImport_Import() was introduced in
1997!  We really need a better test suite for multiple interpreter
states and repeatedly initializing.

This fixes the problems Barry reported in Demo/embed/loop.c.
----------------------------
Modules/unicodedata.c, 2.9->2.11


renamed internal functions to avoid name clashes under OpenVMS
(fixes bug #132815)
----------------------------
Modules/pyexpat.c, 2.40->2.41

Remove the old version of my_StartElementHandler().  This was conditionally
compiled only for some versions of Expat, but was no longer needed as the
new implementation works for all versions.  Keeping it created multiple
definitions for Expat 1.2, which caused compilation to fail.
----------------------------
Lib/urllib.py, 1.116->1.117
provide simple recovery/escape from apparent redirect recursion.  If the
number of entries into http_error_302 exceeds the value set for the maxtries
attribute (which defaults to 10), the recursion is exited by calling
the http_error_500 method (or if that is not defined, http_error_default).
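[Ed. note: modern urllib.request keeps the same idea as class attributes
on HTTPRedirectHandler, so the caps are easy to inspect (values as of
current CPython):]

```python
import urllib.request

h = urllib.request.HTTPRedirectHandler
assert h.max_redirections == 10  # overall cap, like urllib's old maxtries
assert h.max_repeats == 4        # cap on revisiting the very same URL
```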
----------------------------
Modules/posixmodule.c, 2.183->2.184

Add a few more missing prototypes to the SunOS 4.1.4 section (no SF
bugreport, just an IRC one by Marion Delgado.) These prototypes are
necessary because the functions are tossed around, not just called.
----------------------------
Modules/mpzmodule.c, 2.35->2.36

Richard Fish <rfish@users.sourceforge.net>:
Fix the .binary() method of mpz objects for 64-bit systems.

[Also removed a lot of trailing whitespace elsewhere in the file. --FLD]

This closes SF patch #103547.
----------------------------
Python/pythonrun.c, 2.121->2.122
Ugly fix for SF bug 131239 (-x flag busted).
Bug was introduced by tricks played to make .pyc files executable
via cmdline arg.  Then again, -x worked via a trick to begin with.
If anyone can think of a portable way to test -x, be my guest!
----------------------------
Makefile.pre.in, 1.15->1.16
Specify directory permissions properly.  Closes SF patch #103717.
----------------------------
install-sh, 2.3->2.4
Update install-sh using version from automake 1.4.  Closes patch #103657
and #103717.
----------------------------
Modules/socketmodule.c, 1.135->1.136
Patch #103636: Allow writing strings containing null bytes to an SSL socket
----------------------------
Modules/mpzmodule.c, 2.34->2.35
Patch #103523, to make mpz module compile with Cygwin
----------------------------
Objects/floatobject.c, 2.78->2.79
SF patch 103543 from tg@freebsd.org:
PyFPE_END_PROTECT() was called on undefined var
----------------------------
Modules/posixmodule.c, 2.181->2.182
Fix Bug #125891 - os.popen2,3 and 4 leaked file objects on Windows.
----------------------------
Python/ceval.c, 2.224->2.225
SF bug #130532:  newest CVS won't build on AIX.
Removed illegal redefinition of REPR macro; kept the one with the
argument name that isn't too easy to confuse with zero <wink>.
----------------------------
Objects/classobject.c, 2.35->2.36
Rename dubiously named local variable 'cmpfunc' -- this is also a
typedef, and at least one compiler choked on this.

(SF patch #103457, by bquinlan)
----------------------------
Modules/_cursesmodule.c, 2.47->2.50
Patch #103485 from Donn Cave: patches to make the module compile on AIX and
    NetBSD
Rename 'lines' variable to 'nlines' to avoid conflict with a macro defined
    in term.h
2001/01/28 18:10:23 akuchling Modules/_cursesmodule.c
Bug #130117: add a prototype required to compile cleanly on IRIX
   (contributed by Paul Jackson)
----------------------------
Lib/statcache.py, 1.9->1.10
SF bug #130306:  statcache.py full of thread problems.
Fixed the thread races.  Function forget_dir was also utterly Unix-specific.
----------------------------
Python/structmember.c, 1.74->1.75
SF bug http://sourceforge.net/bugs/?func=detailbug&bug_id=130242&group_id=5470
SF patch http://sourceforge.net/patch/?func=detailpatch&patch_id=103453&group_id=5470
PyMember_Set of T_CHAR always raises exception.
Unfortunately, this is a use of a C API function that Python itself never makes, so
there's no .py test I can check in to verify this stays fixed.  But the fault in the
code is obvious, and Dave Cole's patch just as obviously fixes it.
----------------------------
Modules/arraymodule.c, 2.61->2.62
Correct one-line typo, reported by yole @ SF, bug 130077.
----------------------------
Python/compile.c, 2.150->2.151
Fix bug reported by Ka-Ping Yee: The compiler botched parsing function
parameters that contained both anonymous tuples and *arg or **arg. Ex:
def f(a, (b, c), *d): pass

Fix the symtable_params() to generate names in the right order for
co_varnames slot of code object.  Consider *arg and **arg before the
"complex" names introduced by anonymous tuples.
----------------------------
Modules/config.c.in, 1.72->1.73
_PyImport_Inittab: define the exceptions module's init function.
Fixes bug #121706.
----------------------------
Python/exceptions.c, 1.19->1.20
[Ed. -- only partial]
Leak pluggin', bug fixin' and better documentin'.  Specifically,

module__doc__: Document the Warning subclass hierarchy.

make_class(): Added a "goto finally" so that if populate_methods()
fails, the return status will be -1 (failure) instead of 0 (success).

fini_exceptions(): When decref'ing the static pointers to the
exception classes, clear out their dictionaries too.  This breaks a
cycle from class->dict->method->class and allows the classes with
unbound methods to be reclaimed.  This plugs a large memory leak in a
common Py_Initialize()/dosomething/Py_Finalize() loop.
----------------------------
Python/pythonrun.c, 2.118->2.119
Lib/atexit.py, 1.3->1.4
Bug #128475: mimetools.encode (sometimes) fails when called from a thread.
pythonrun.c:  In Py_Finalize, don't reset the initialized flag until after
the exit funcs have run.
atexit.py:  in _run_exitfuncs, mutate the list of pending calls in a
threadsafe way.  This wasn't a contributor to bug 128475, it just burned
my eyeballs when looking at that bug.
----------------------------
Modules/ucnhash.c, 1.6->1.7
gethash/cmpname both looked beyond the end of the character name.
This patch makes u"\N{x}" a bit less dependent on pure luck...
----------------------------
Lib/urllib.py, 1.112->1.113
Anonymous SF bug 129288: "The python 2.0 urllib has %%%x as a format
when quoting forbidden characters. There are scripts out there that
break with lower case, therefore I guess %%%X should be used."

I agree, so am fixing this.
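A minimal sketch of the fix (not the real urllib.quote; the safe-character set here is illustrative): forbidden characters are escaped as %XX with uppercase hex digits.

```python
def quote_upper(s, safe='/'):
    # Sketch of the fix: escape forbidden characters as %XX with
    # uppercase hex digits, since some servers reject lowercase.
    always_safe = ('ABCDEFGHIJKLMNOPQRSTUVWXYZ'
                   'abcdefghijklmnopqrstuvwxyz'
                   '0123456789_.-')
    out = []
    for ch in s:
        if ch in always_safe or ch in safe:
            out.append(ch)
        else:
            out.append('%%%02X' % ord(ch))   # uppercase X, not x
    return ''.join(out)
```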
----------------------------
Python/bltinmodule.c, 2.191->2.192
Fix for the bug in complex() just reported by Ping.
----------------------------
Modules/socketmodule.c, 1.130->1.131
Use openssl/*.h to include the OpenSSL header files
----------------------------
Lib/distutils/command/install.py, 1.55->1.56
Modified version of a patch from Jeremy Kloth, to make .get_outputs()
produce a list of unique filenames:
    "While attempting to build an RPM using distutils on Python 2.0,
    rpm complained about duplicate files.  The following patch fixed
    that problem.
----------------------------
Objects/unicodeobject.c, 2.72->2.73
Objects/stringobject.c, 2.96->2.97
(partial)
Added checks to prevent PyUnicode_Count() from dumping core
in case the parameters are out of bounds and fixes error handling
for .count(), .startswith() and .endswith() for the case of
mixed string/Unicode objects.

This patch adds Python style index semantics to PyUnicode_Count()
indices (including the special handling of negative indices).

The patch is an extended version of patch #103249 submitted
by Michael Hudson (mwh) on SF. It also includes new test cases.
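The Python-style index semantics can be sketched like this (a simplified stand-in for the C logic): negative indices count from the end, and the result is clamped into [0, length].

```python
def adjust_indices(start, end, length):
    # Sketch of Python index semantics as added to PyUnicode_Count:
    # negative indices count from the end; clamp into [0, length].
    if start < 0:
        start = max(0, start + length)
    if end < 0:
        end = max(0, end + length)
    return start, min(end, length)
```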
----------------------------
Modules/posixmodule.c, 2.180->2.181
Plug memory leak.
----------------------------
Python/dynload_mac.c, 2.9->2.11
Use #if TARGET_API_MAC_CARBON to determine carbon/classic macos, not #ifdef.
Added a separate extension (.carbon.slb) for Carbon dynamic modules.
----------------------------
Modules/mmapmodule.c, 2.26->2.27
SF bug 128713:  type(mmap_object) blew up on Linux.
----------------------------
Python/sysmodule.c, 2.81->2.82
stdout is sometimes a macro; use "outf" instead.

Submitted by: Mark Favas <m.favas@per.dem.csiro.au>
----------------------------
Python/ceval.c, 2.215->2.216
Add missing Py_DECREF in fast_cfunction.  Partial fix for SF bug
#127699.
----------------------------
Modules/mmapmodule.c, 2.24->2.25
Windows mmap should (as the docs probably <wink> say) create a mapping
without a name when the optional tagname arg isn't specified.  Was
actually creating a mapping with an empty string as the name.
----------------------------
Lib/shlex.py, 1.10->1.11
Patch #102953: Fix bug #125452, where shlex.shlex hangs when it
    encounters a string with an unmatched quote, by adding a check for
    EOF in the 'quotes' state.
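With the EOF check in place, an unmatched quote ends in an exception instead of a hang; a quick demonstration (the exact exception message is an assumption):

```python
import shlex

# With the fix, the 'quotes' state notices EOF, so an unmatched
# quote raises ValueError instead of looping forever.
lex = shlex.shlex('"no closing quote')
error = None
try:
    lex.get_token()
except ValueError as exc:
    error = str(exc)
```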
----------------------------
Modules/binascii.c, 2.27->2.28
Address a bug in the uuencode decoder, reported by "donut" in SF bug
#127718: '@' and '`' seem to be confused.
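The underlying arithmetic: uuencode stores each 6-bit value v as chr(32 + v), but some encoders emit '`' (96) instead of ' ' (32) for zero, so the decoder must mask to 6 bits. A sketch of a single-character decode:

```python
def uu_decode_char(ch):
    # chr(32 + v) encodes v in 0..63; mask to 6 bits so that
    # '`' (96) decodes to 0 while '@' (64) decodes to 32.
    return (ord(ch) - 32) & 0x3F
```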
----------------------------
Objects/fileobject.c, 2.102->2.103
Tsk, tsk, tsk.  Treat FreeBSD the same as the other BSDs when defining
a fallback for TELL64.  Fixes SF Bug #128119.
----------------------------
Modules/posixmodule.c, 2.179->2.180
Anonymous SF bug report #128053 point out that the #ifdef for
including "tmpfile" in the posix_methods[] array is wrong -- should be
HAVE_TMPFILE, not HAVE_TMPNAM.
----------------------------
Lib/urllib.py, 1.109->1.110
Fixed bug which caused HTTPS not to work at all with string URLs
----------------------------
Objects/floatobject.c, 2.76->2.77
Fix a silly bug in float_pow.  Sorry Tim.
----------------------------
Modules/fpectlmodule.c, 2.12->2.13
Patch #103012: Update fpectlmodule for current glibc;
    The _setfpucw() function/macro doesn't seem to exist any more;
    instead there's an _FPU_SETCW macro.
----------------------------
Objects/dictobject.c, 2.71->2.72
dict_update has two boundary conditions: a.update(a) and a.update({})
Added test for second one.
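The two boundary cases, demonstrated at the Python level: updating a dict with itself and updating with an empty dict must both leave it unchanged.

```python
# dict.update boundary conditions: both must be safe no-ops.
d = {'a': 1, 'b': 2}
d.update(d)        # self-update: must not loop or corrupt the dict
d.update({})       # empty update: must change nothing
```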
----------------------------
Objects/listobject.c
fix leak
----------------------------
Lib/getopt.py, 1.11->1.13
getopt used to sort the long option names, in an attempt to simplify
the logic.  That resulted in a bug.  My previous getopt checkin repaired
the bug but left the sorting.  The solution is significantly simpler if
we don't bother sorting at all, so this checkin gets rid of the sort and
the code that relied on it.
Fix for SF bug
https://sourceforge.net/bugs/?func=detailbug&bug_id=126863&group_id=5470
"getopt long option handling broken".  Tossed the excruciating logic in
long_has_args in favor of something obviously correct.
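The "obviously correct" replacement can be sketched as a plain scan over the unsorted option list (a simplified stand-in: the real getopt raises GetoptError, not ValueError, and handles more cases):

```python
def long_has_args(opt, longopts):
    # Scan all long options for an exact match or a unique prefix;
    # no reliance on the list being sorted.
    possibilities = [o for o in longopts if o.startswith(opt)]
    if not possibilities:
        raise ValueError('option --%s not recognized' % opt)
    if opt in possibilities:
        return False, opt              # exact match, takes no argument
    if opt + '=' in possibilities:
        return True, opt               # exact match, takes an argument
    if len(possibilities) > 1:
        raise ValueError('option --%s not a unique prefix' % opt)
    unique = possibilities[0]          # unique prefix match
    has_arg = unique.endswith('=')
    return has_arg, unique.rstrip('=')
```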
----------------------------
Lib/curses/ascii.py, 1.3->1.4
Make isspace(chr(32)) return true
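A sketch of the fixed predicate (illustrative, not the module's exact source): the plain space, chr(32), counts as whitespace along with the control characters.

```python
def isspace(c):
    # chr(32), the plain space, must count as whitespace
    # alongside tab, newline, vertical tab, formfeed and CR.
    return c in ' \t\n\v\f\r'
```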
----------------------------
Lib/distutils/command/install.py, 1.54->1.55
Add forgotten initialization.  Fixes bug #120994, "Traceback with
    DISTUTILS_DEBUG set"
----------------------------
Objects/unicodeobject.c, 2.68->2.69
Fix off-by-one error in split_substring().  Fixes SF bug #122162.
----------------------------
Modules/cPickle.c, 2.53->2.54
Lib/pickle.py, 1.40->1.41
Minimal fix for the complaints about pickling Unicode objects.  (SF
bugs #126161 and 123634).

The solution doesn't use the unicode-escape encoding; that has other
problems (it seems not 100% reversible).  Rather, it transforms the
input Unicode object slightly before encoding it using
raw-unicode-escape, so that the decoding will reconstruct the original
string: backslash and newline characters are translated into their
\uXXXX counterparts.

This is backwards incompatible for strings containing backslashes, but
for some of those strings, the pickling was already broken.

Note that SF bug #123634 complains specifically that cPickle fails to
unpickle the pickle for u'' (the empty Unicode string) correctly.
This was an off-by-one error in load_unicode().

XXX Ugliness: in order to do the modified raw-unicode-escape, I've
cut-and-pasted a copy of PyUnicode_EncodeRawUnicodeEscape() into this
file that also encodes '\\' and '\n'.  It might be nice to migrate
this into the Unicode implementation and give this encoding a new name
('half-raw-unicode-escape'? 'pickle-unicode-escape'?); that would help
pickle.py too.  But right now I can't be bothered with the necessary
infrastructural changes.
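The transform described above, sketched in Python (the function name is hypothetical): backslash and newline are rewritten as \uXXXX escapes before raw-unicode-escape encoding, so decoding reconstructs the original string exactly.

```python
def pickle_escape(u):
    # Escape backslash first, then newline, as \uXXXX sequences;
    # raw-unicode-escape decoding turns them back into the originals.
    u = u.replace('\\', '\\u005c').replace('\n', '\\u000a')
    return u.encode('raw-unicode-escape')

sample = 'a\\b\nc'
round_tripped = pickle_escape(sample).decode('raw-unicode-escape')
```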
----------------------------
Modules/socketmodule.c, 1.129->1.130
Adapted from a patch by Barry Scott, SF patch #102875 and SF bug
#125981: closing sockets was not thread-safe.
----------------------------
Lib/xml/dom/__init__.py, 1.4->1.6

Typo caught by /F -- thanks!
DOMException.__init__():  Remember to pass self to Exception.__init__().
----------------------------
Lib/urllib.py, 1.108->1.109
(partial)
Get rid of string functions, except maketrans() (which is *not*
obsolete!).

Fix a bug in ftpwrapper.retrfile() where somehow ftplib.error_perm was
assumed to be a string.  (The fix applies str().)

Also break some long lines and change the output from test() slightly.
----------------------------
Modules/bsddbmodule.c, 1.25->1.26
[Patch #102827] Fix for PR#119558, avoiding core dumps by checking for
malloc() returning NULL
----------------------------
Lib/site.py, 1.21->1.22
The ".pth" code knew about the layout of Python trees on unix and
windows, but not on the mac. Fixed.
----------------------------
Modules/selectmodule.c, 1.83->1.84
SF bug 110843:  Low FD_SETSIZE limit on Win32 (PR#41).  Boosted to 512.
----------------------------
Modules/parsermodule.c, 2.58->2.59

validate_varargslist():  Fix two bugs in this function, one that affected
                         it when *args and/or **kw are used, and one when
                         they are not.

This closes bug #125375: "parser.tuple2ast() failure on valid parse tree".
----------------------------
Lib/httplib.py, 1.24->1.25
Hopeful fix for SF bug #123924: Windows - using OpenSSL, problem with
socket in httplib.py.

The bug reports that on Windows, you must pass sock._sock to the
socket.ssl() call.  But on Unix, you must pass sock itself.  (sock is
a wrapper on Windows but not on Unix; the ssl() call wants the real
socket object, not the wrapper.)

So we see if sock has an _sock attribute and if so, extract it.

Unfortunately, the submitter of the bug didn't confirm that this patch
works, so I'll just have to believe it (can't test it myself since I
don't have OpenSSL on Windows set up, and that's a nontrivial thing I
believe).
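The unwrapping logic, sketched with a stand-in wrapper class (the _FakeWrapper here is purely illustrative):

```python
class _FakeWrapper:
    # Stand-in for the Windows-style wrapper that holds the real
    # socket in ._sock (hypothetical class, for illustration only).
    def __init__(self, sock):
        self._sock = sock

def real_socket(sock):
    # Sketch of the fix: unwrap when a ._sock attribute is present
    # (Windows wrapper); otherwise the object already is the socket.
    if hasattr(sock, '_sock'):
        sock = sock._sock
    return sock

plain = object()
wrapped = _FakeWrapper(plain)
```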
----------------------------
Python/getargs.c, 2.50->2.51
vgetargskeywords(): Patch for memory leak identified in bug #119862.
----------------------------
Lib/ConfigParser.py, 1.23->1.24

remove_option():  Use the right variable name for the option name!

This closes bug #124324.
----------------------------
Lib/filecmp.py, 1.6->1.7
Call of _cmp had wrong number of parameters.
Fixed definition of _cmp.
----------------------------
Python/compile.c, 2.143->2.144
Plug a memory leak in com_import_stmt(): the tuple created to hold the
"..." in "from M import ..." was never DECREFed.  Leak reported by
James Slaughter and nailed by Barry, who also provided an earlier
version of this patch.
----------------------------
Objects/stringobject.c, 2.92->2.93
SF patch #102548, fix for bug #121013, by mwh@users.sourceforge.net.

Fixes a typo that caused "".join(u"this is a test") to dump core.
----------------------------
Python/marshal.c, 1.57->1.58
Python/compile.c, 2.142->2.143
SF bug 119622:  compile errors due to redundant atof decls.  I don't understand
the bug report (for details, look at it), but agree there's no need for Python
to declare atof itself:  we #include stdlib.h, and ANSI C sez atof is declared
there already.
----------------------------
Lib/webbrowser.py, 1.4->1.5
Typo for Mac code, fixing SF bug 12195.
----------------------------
Objects/fileobject.c, 2.91->2.92
Added _HAVE_BSDI and __APPLE__ to the list of platforms that require a
hack for TELL64()...  Sounds like there's something else going on
really.  Does anybody have a clue I can buy?
----------------------------
Python/thread_cthread.h, 2.13->2.14
Fix syntax error.  Submitted by Bill Bumgarner.  Apparently this is
still in use, for Apple Mac OSX.
----------------------------
Modules/arraymodule.c, 2.58->2.59
Fix for SF bug 117402, crashes on str(array) and repr(array).  This was an
unfortunate consequence of somebody switching from PyArg_Parse to
PyArg_ParseTuple but without changing the argument from a NULL to a tuple.
----------------------------
Lib/smtplib.py, 1.29->1.30
SMTP.connect(): If the socket.connect() raises a socket.error, be sure
to call self.close() to reclaim some file descriptors, then reraise the
exception.  Closes SF patch #102185 and SF bug #119833.
----------------------------
Objects/rangeobject.c, 2.20->2.22

Fixed support for containment test when a negative step is used; this
*really* closes bug #121965.

Added three attributes to the xrange object: start, stop, and step.  These
are the same as for the slice objects.

In the containment test, get the boundary condition right.  ">" was used
where ">=" should have been.

This closes bug #121965.
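The corrected containment test, sketched as a hypothetical Python helper: with a negative step the comparison against the start bound must be inclusive (the ">" that became ">="), and the value must lie exactly on the step grid.

```python
def xrange_contains(value, start, stop, step):
    # Sketch of the fixed containment test (step != 0 assumed).
    if step > 0:
        in_bounds = start <= value < stop
    else:
        # inclusive at 'start': this is where ">" had to become ">="
        in_bounds = stop < value <= start
    return in_bounds and (value - start) % step == 0
```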
----------------------------
configure.in, 1.177->1.178
Fix for SF bug #117606:
  - when compiling with GCC on Solaris, use "$(CC) -shared" instead
    of "$(CC) -G" to generate .so files
  - when compiling with GCC on any platform, add "-fPIC" to OPT
    (without this, "$(CC) -shared" dies horribly)
----------------------------
configure.in, 1.175->1.176

Make sure the Modules/ directory is created before writing Modules/Setup.
----------------------------
Modules/_cursesmodule.c, 2.39->2.40
Patch from Randall Hopper to fix PR #116172, "curses module fails to
build on SGI":
* Check for 'sgi' preprocessor symbol, not '__sgi__'
* Surround individual character macros with #ifdef's, instead of making them
  all rely on STRICT_SYSV_CURSES
----------------------------
Modules/_tkinter.c, 1.114->1.115
Do not release unallocated Tcl objects. Closes #117278 and  #117167.
----------------------------
Python/dynload_shlib.c, 2.6->2.7
Patch 102114, Bug 11725.  On OpenBSD (but apparently not on the other
BSDs) you need a leading underscore in the dlsym() lookup name.
----------------------------
Lib/UserString.py, 1.6->1.7
Fix two typos in __imul__.  Closes Bug #117745.
----------------------------
Lib/mailbox.py, 1.25->1.26

Maildir.__init__():  Make sure self.boxes is set.

This closes SourceForge bug #117490.
----------------------------

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From tim.one@home.com  Wed Mar 28 17:51:27 2001
From: tim.one@home.com (Tim Peters)
Date: Wed, 28 Mar 2001 12:51:27 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com>

Whew!  What a thankless job, Moshe -- thank you!  Comments on a few:

> Objects/complexobject.c, 2.34->2.35
> SF bug [ #409448 ] Complex division is braindead
> http://sourceforge.net/tracker/?func=detail&aid=409448&group_id=547
> 0&atid=105470

As we've seen, that caused a std test to fail on Mac Classic, due to an
accident of fused f.p. code generation and what sure looks like a PowerPC HW
bug.  It can also change numeric results slightly due to different order of
f.p. operations on any platform.  So this would not be a "pure bugfix" in
Aahz's view, despite that it's there purely to fix bugs <wink>.

> Modules/selectmodule.c, 1.83->1.84
> SF bug 110843:  Low FD_SETSIZE limit on Win32 (PR#41).  Boosted to 512.

I'm afraid that boosting implementation limits has to be considered "a
feature".

> Objects/rangeobject.c, 2.20->2.22
>
> Fixed support for containment test when a negative step is used; this
> *really* closes bug #121965.
>
> Added three attributes to the xrange object: start, stop, and step.
> These are the same as for the slice objects.
>
> In the containment test, get the boundary condition right.  ">" was used
> where ">=" should have been.
>
> This closes bug #121965.

This one Aahz singled out previously as a canonical example of a patch he
would *not* include, because adding new attributes seemed potentially
disruptive to him (but why?  maybe someone was depending on the precise value
of len(dir(xrange(42)))?).



From aahz@pobox.com  Wed Mar 28 17:57:49 2001
From: aahz@pobox.com (Aahz Maruch)
Date: Wed, 28 Mar 2001 09:57:49 -0800 (PST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com> from "Tim Peters" at Mar 28, 2001 12:51:27 PM
Message-ID: <200103281757.MAA04464@panix3.panix.com>

Tim:
> Moshe:
>>
>> Fixed support for containment test when a negative step is used; this
>> *really* closes bug #121965.
>>
>> Added three attributes to the xrange object: start, stop, and step.
>> These are the same as for the slice objects.
>>
>> In the containment test, get the boundary condition right.  ">" was used
>> where ">=" should have been.
>>
>> This closes bug #121965.
> 
> This one Aahz singled out previously as a canonical example of a
> patch he would *not* include, because adding new attributes seemed
> potentially disruptive to him (but why? maybe someone was depending on
> the precise value of len(dir(xrange(42)))?).

I'm not sure about this, but it seems to me that the attribute change
will generate a different .pyc.  If I'm wrong about that, this patch
as-is is fine with me; otherwise, I'd lobby to use the containment fix
but not the attributes (assuming we're willing to use part of a patch).

From my POV, it's *real* important that .pyc files be portable between
bugfix releases, and so far I haven't seen any argument against that
goal.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"Boost the stock market -- fire someone"


From mwh21@cam.ac.uk  Wed Mar 28 18:18:28 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 28 Mar 2001 19:18:28 +0100
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Moshe Zadka's message of "Wed, 28 Mar 2001 19:02:01 +0200"
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <m3itktdaff.fsf@atrus.jesus.cam.ac.uk>

Moshe Zadka <moshez@zadka.site.co.il> writes:

> After labouring over the list of log messages for 2-3 days, I finally
> have a tentative list of changes. I present it as a list of checkin
> messages, complete with the versions. Sometimes I concatenated several
> consecutive checkins into one -- "I fixed the bug", "oops, typo last
> fix" and similar.
> 
> Please go over the list and see if there's anything you feel should
> not go.

I think there are some that don't apply to 2.0.1:

> Python/pythonrun.c, 2.128->2.129
> Fix memory leak with SyntaxError.  (The DECREF was originally hidden
> inside a piece of code that was deemed redundant; the DECREF was
> unfortunately *not* redundant!)

and

> Python/compile.c, 2.150->2.151
> Fix bug reported by Ka-Ping Yee: The compiler botched parsing function
> parameters that contained both anonymous tuples and *arg or **arg. Ex:
> def f(a, (b, c), *d): pass
> 
> Fix the symtable_params() to generate names in the right order for
> co_varnames slot of code object.  Consider *arg and **arg before the
> "complex" names introduced by anonymous tuples.

aren't meaningful without the nested scopes stuff.  But I guess you'll
notice pretty quickly if I'm right...

Otherwise, general encouragement!  Please keep it up.

Cheers,
M.

-- 
  languages shape the way we think, or don't.
                                        -- Erik Naggum, comp.lang.lisp



From jeremy@alum.mit.edu  Wed Mar 28 17:07:10 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Wed, 28 Mar 2001 12:07:10 -0500 (EST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <15042.6718.542630.936641@w221.z064000254.bwi-md.dsl.cnc.net>

> Python/ceval.c, 2.224->2.225
> SF bug #130532:  newest CVS won't build on AIX.
> Removed illegal redefinition of REPR macro; kept the one with the
> argument name that isn't too easy to confuse with zero <wink>.

The REPR macro was not present in 2.0 and is no longer present in 2.1.

Jeremy


From guido@digicool.com  Wed Mar 28 18:21:18 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 13:21:18 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 09:57:49 PST."
 <200103281757.MAA04464@panix3.panix.com>
References: <200103281757.MAA04464@panix3.panix.com>
Message-ID: <200103281821.NAA10019@cj20424-a.reston1.va.home.com>

> > This one Aahz singled out previously as a canonical example of a
> > patch he would *not* include, because adding new attributes seemed
> > potentially disruptive to him (but why? maybe someone was depending on
> > the precise value of len(dir(xrange(42)))?).
> 
> I'm not sure about this, but it seems to me that the attribute change
> will generate a different .pyc.  If I'm wrong about that, this patch
> as-is is fine with me; otherwise, I'd lobby to use the containment fix
> but not the attributes (assuming we're willing to use part of a patch).

Adding attributes to xrange() can't possibly change the .pyc files.

> From my POV, it's *real* important that .pyc files be portable between
> bugfix releases, and so far I haven't seen any argument against that
> goal.

Agreed with the goal, of course.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From jeremy@alum.mit.edu  Wed Mar 28 17:07:03 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Wed, 28 Mar 2001 12:07:03 -0500 (EST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <15042.6711.20698.535298@w221.z064000254.bwi-md.dsl.cnc.net>

> Python/compile.c, 2.150->2.151
> Fix bug reported by Ka-Ping Yee: The compiler botched parsing function
> parameters that contained both anonymous tuples and *arg or **arg. Ex:
> def f(a, (b, c), *d): pass
>
> Fix the symtable_params() to generate names in the right order for
> co_varnames slot of code object.  Consider *arg and **arg before the
> "complex" names introduced by anonymous tuples.

I believe this bug report was only relevant for the compiler w/
symbol table pass introduced in Python 2.1.

Jeremy


From jeremy@alum.mit.edu  Wed Mar 28 17:07:22 2001
From: jeremy@alum.mit.edu (Jeremy Hylton)
Date: Wed, 28 Mar 2001 12:07:22 -0500 (EST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <15042.6730.575137.298460@w221.z064000254.bwi-md.dsl.cnc.net>

> Python/ceval.c, 2.215->2.216
> Add missing Py_DECREF in fast_cfunction.  Partial fix for SF bug
> #127699.

fast_cfunction was not present in Python 2.0.  The CALL_FUNCTION
implementation in ceval.c was rewritten for Python 2.1.

Jeremy



From moshez@zadka.site.co.il  Wed Mar 28 18:22:27 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 20:22:27 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com>
Message-ID: <E14iKaR-0000d5-00@darjeeling>

On Wed, 28 Mar 2001 12:51:27 -0500, "Tim Peters" <tim.one@home.com> wrote:

> Whew!  What a thankless job, Moshe -- thank you!

I just wanted to keep this in to illustrate the ironical nature of the
universe ;-)

>  Comments on a few:
> 
> > Objects/complexobject.c, 2.34->2.35
> > SF bug [ #409448 ] Complex division is braindead
> > http://sourceforge.net/tracker/?func=detail&aid=409448&group_id=547
> > 0&atid=105470
> 
> As we've seen, that caused a std test to fail on Mac Classic

OK, it's dead.

> > Modules/selectmodule.c, 1.83->1.84
> > SF bug 110843:  Low FD_SETSIZE limit on Win32 (PR#41).  Boosted to 512.
> 
> I'm afraid that boosting implementation limits has to be considered "a
> feature".

You're right. Killed.

> > Objects/rangeobject.c, 2.20->2.22
> >
> > Fixed support for containment test when a negative step is used; this
> > *really* closes bug #121965.
> >
> > Added three attributes to the xrange object: start, stop, and step.
> > These are the same as for the slice objects.
> >
> > In the containment test, get the boundary condition right.  ">" was used
> > where ">=" should have been.
> >
> > This closes bug #121965.
> 
> This one Aahz singled out previously as a canonical example of a patch he
> would *not* include, because adding new attributes seemed potentially
> disruptive to him (but why?  maybe someone was depending on the precise value
> of len(dir(xrange(42)))?).

You're right, I forgot to (partial) this.
"(partial)" means, BTW, that only part of the patch goes in.
I do want to fix the containment, and it's in the same version upgrade.
More work for me! Yay!

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From moshez@zadka.site.co.il  Wed Mar 28 18:25:21 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 20:25:21 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <15042.6730.575137.298460@w221.z064000254.bwi-md.dsl.cnc.net>
References: <15042.6730.575137.298460@w221.z064000254.bwi-md.dsl.cnc.net>, <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <E14iKdF-0000eg-00@darjeeling>

On Wed, 28 Mar 2001, Jeremy Hylton <jeremy@alum.mit.edu> wrote:

> > Python/ceval.c, 2.215->2.216
> > Add missing Py_DECREF in fast_cfunction.  Partial fix for SF bug
> > #127699.
> 
> fast_cfunction was not present in Python 2.0.  The CALL_FUNCTION
> implementation in ceval.c was rewritten for Python 2.1.

Thanks, dropped. Ditto for the REPR and the *arg parsing.

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From moshez@zadka.site.co.il  Wed Mar 28 18:30:31 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 20:30:31 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <200103281757.MAA04464@panix3.panix.com>
References: <200103281757.MAA04464@panix3.panix.com>
Message-ID: <E14iKiF-0000fW-00@darjeeling>

On Wed, 28 Mar 2001 09:57:49 -0800 (PST), <aahz@panix.com> wrote:
 
> From my POV, it's *real* important that .pyc files be portable between
> bugfix releases, and so far I haven't seen any argument against that
> goal.

It is a release-critical goal, yes.
It's not an argument against adding attributes to range objects.
However, adding attributes to range objects is a no-go, and it got in by
mistake.

The list should be, of course, treated as a first rough draft. I'll post a 
more complete list to p-d and p-l after it's hammered out a bit. Since
everyone who checked in stuff is on this mailing list, I wanted people
to review their own checkins first, to see that I'm not making complete blunders.

Thanks a lot to Tim, Jeremy and /F for their feedback, by the way.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From aahz@pobox.com  Wed Mar 28 19:06:15 2001
From: aahz@pobox.com (Aahz Maruch)
Date: Wed, 28 Mar 2001 11:06:15 -0800 (PST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <no.id> from "Guido van Rossum" at Mar 28, 2001 01:21:18 PM
Message-ID: <200103281906.OAA10976@panix6.panix.com>

Guido:
>Aahz:
>>
>> I'm not sure about this, but it seems to me that the attribute change
>> will generate a different .pyc.  If I'm wrong about that, this patch
>> as-is is fine with me; otherwise, I'd lobby to use the containment fix
>> but not the attributes (assuming we're willing to use part of a patch).
> 
> Adding attributes to xrange() can't possibly change the .pyc files.

Okay, chalk another one up to ignorance.  Another thought occurred to me
in the shower, though: would this change the pickle of xrange()?  If yes,
should pickle changes also be prohibited in bugfix releases (in the PEP)?
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"Boost the stock market -- fire someone"


From guido@digicool.com  Wed Mar 28 19:12:59 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 14:12:59 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 19:02:01 +0200."
 <E14iJKb-0000Kf-00@darjeeling>
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>

> After labouring over the list of log messages for 2-3 days, I finally
> have a tentative list of changes. I present it as a list of checkin
> messages, complete with the versions. Sometimes I concatenated several
> consecutive checkins into one -- "I fixed the bug", "oops, typo last
> fix" and similar.

Good job, Moshe!  The few where I had doubts have already been covered
by others.  As the saying goes, "check it in" :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)


From fredrik@effbot.org  Wed Mar 28 19:21:46 2001
From: fredrik@effbot.org (Fredrik Lundh)
Date: Wed, 28 Mar 2001 21:21:46 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
References: <200103281906.OAA10976@panix6.panix.com>
Message-ID: <018601c0b7bc$55d08f00$e46940d5@hagrid>

> Okay, chalk another one up to ignorance.  Another thought occurred to me
> in the shower, though: would this change the pickle of xrange()?  If yes,
> should pickle changes also be prohibited in bugfix releases (in the PEP)?

from the why-dont-you-just-try-it department:

Python 2.0 (#8, Jan 29 2001, 22:28:01) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import pickle
>>> data = xrange(10)
>>> dir(data)
['tolist']
>>> pickle.dumps(data)
Traceback (most recent call last):
...
pickle.PicklingError: can't pickle 'xrange' object: xrange(10)

Python 2.1b2 (#12, Mar 22 2001, 15:15:01) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import pickle
>>> data = xrange(10)
>>> dir(data)
['start', 'step', 'stop', 'tolist']
>>> pickle.dumps(data)
Traceback (most recent call last):
...
pickle.PicklingError: can't pickle 'xrange' object: xrange(10)

Cheers /F



From aahz@pobox.com  Wed Mar 28 19:17:59 2001
From: aahz@pobox.com (Aahz Maruch)
Date: Wed, 28 Mar 2001 11:17:59 -0800 (PST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <no.id> from "Fredrik Lundh" at Mar 28, 2001 09:21:46 PM
Message-ID: <200103281917.OAA12358@panix6.panix.com>

> > Okay, chalk another one up to ignorance.  Another thought occurred to me
> > in the shower, though: would this change the pickle of xrange()?  If yes,
> > should pickle changes also be prohibited in bugfix releases (in the PEP)?
> 
> from the why-dont-you-just-try-it department:

You're right, I should have tried it.  I didn't because my shell account
still hasn't set up Python 2.0 as the default version and I haven't yet
set myself up to test beta/patch/CVS releases.  <sigh>  The more I
learn, the more ignorant I feel....
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz@pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"Boost the stock market -- fire someone"


From guido@digicool.com  Wed Mar 28 19:18:26 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 14:18:26 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 11:06:15 PST."
 <200103281906.OAA10976@panix6.panix.com>
References: <200103281906.OAA10976@panix6.panix.com>
Message-ID: <200103281918.OAA10296@cj20424-a.reston1.va.home.com>

> > Adding attributes to xrange() can't possibly change the .pyc files.
> 
> Okay, chalk another one up to ignorance.  Another thought occurred to me
> in the shower, though: would this change the pickle of xrange()?  If yes,
> should pickle changes also be prohibited in bugfix releases (in the PEP)?

I agree that pickle changes should be prohibited, although I want to
make an exception for the fix to pickling of Unicode objects (which is
pretty broken in 2.0).

That said, xrange() objects can't be pickled, so it's a non-issue. :-)
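[For context, a hedged sketch -- not from the thread -- of how an
xrange-like object *could* be taught to pickle, by reducing it to its
defining parameters with the copyreg module (spelled copy_reg in the
Python 2 of this era). `Span` is a hypothetical stand-in, not the real
xrange type:]

```python
import copyreg  # spelled copy_reg in Python 2
import pickle

class Span:
    """Hypothetical xrange-like object: stores only start/stop/step."""
    def __init__(self, start, stop, step=1):
        self.start, self.stop, self.step = start, stop, step
    def tolist(self):
        return list(range(self.start, self.stop, self.step))

def _reduce_span(s):
    # Rebuild the object from its three defining attributes.
    return (Span, (s.start, s.stop, s.step))

copyreg.pickle(Span, _reduce_span)

# Round-trips once the reduce function is registered.
data = pickle.loads(pickle.dumps(Span(0, 10)))
```

[Of course, adding such a reduce function would itself be a pickle
change, exactly the kind of thing the PEP would keep out of a bugfix
release.]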

--Guido van Rossum (home page: http://www.python.org/~guido/)


From jack@oratrix.nl  Wed Mar 28 19:59:26 2001
From: jack@oratrix.nl (Jack Jansen)
Date: Wed, 28 Mar 2001 21:59:26 +0200 (MET DST)
Subject: [Python-Dev] MacPython 2.1b2 available
Message-ID: <20010328195926.47261EA11F@oratrix.oratrix.nl>

MacPython 2.1b2 is available for download. Get it via
http://www.cwi.nl/~jack/macpython.html .

New in this version:
- A choice of Carbon or Classic runtime, so runs on anything between
  MacOS 8.1 and MacOS X
- Distutils support for easy installation of extension packages
- BBedit language plugin
- All the platform-independent Python 2.1 mods
- New version of Numeric
- Lots of bug fixes
- Choice of normal and active installer

Please send feedback on this release to pythonmac-sig@python.org,
where all the maccies hang out.

Enjoy,


--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen@oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++


From moshez@zadka.site.co.il  Wed Mar 28 19:58:23 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 21:58:23 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <m3itktdaff.fsf@atrus.jesus.cam.ac.uk>
References: <m3itktdaff.fsf@atrus.jesus.cam.ac.uk>, <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <E14iM5H-0000rB-00@darjeeling>

On 28 Mar 2001 19:18:28 +0100, Michael Hudson <mwh21@cam.ac.uk> wrote:
 
> I think there are some that don't apply to 2.0.1:
> 
> > Python/pythonrun.c, 2.128->2.129
> > Fix memory leak with SyntaxError.  (The DECREF was originally hidden
> > inside a piece of code that was deemed redundant; the DECREF was
> > unfortunately *not* redundant!)

OK, dead.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From moshez@zadka.site.co.il  Wed Mar 28 20:05:38 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 22:05:38 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>
References: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>, <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <E14iMCI-0000s2-00@darjeeling>

On Wed, 28 Mar 2001 14:12:59 -0500, Guido van Rossum <guido@digicool.com> wrote:
 
> The few where I had doubts have already been covered
> by others.  As the saying goes, "check it in" :-)

I'm afraid it will still take time to generate the patches, apply
them, test them, etc....
I was hoping to create a list of patches tonight, but I'm a bit too
dead. I'll post to p-l tomorrow with the new list of patches.

PS.
Tools/scripts/logmerge.py loses version numbers. That pretty much
sucks for doing the work I did, even though the raw log was worse --
I ended up cross referencing and finding version numbers by hand.
If anyone doesn't have anything better to do, here's a nice gift
for 2.1 ;-)

PPS.
Most of the work I can do myself just fine. There are a couple of places
where I could *really* need some help. One of those is testing fixes
for bugs which manifest on exotic OSes (and as far as I'm concerned, 
Windows is as exotic as they come <95 wink>.) Please let me know if
you're interested in testing patches for them.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From guido@digicool.com  Wed Mar 28 20:19:19 2001
From: guido@digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 15:19:19 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 22:05:38 +0200."
 <E14iMCI-0000s2-00@darjeeling>
References: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>, <E14iJKb-0000Kf-00@darjeeling>
 <E14iMCI-0000s2-00@darjeeling>
Message-ID: <200103282019.PAA10717@cj20424-a.reston1.va.home.com>

> > The few where I had doubts have already been covered
> > by others.  As the saying goes, "check it in" :-)
> 
> I'm afraid it will still take time to generate the patches, apply
> them, test them, etc....

Understood!  There's no immediate hurry (except for the fear that you
might be distracted by real work :-).

> I was hoping to create a list of patches tonight, but I'm a bit too
> dead. I'll post to p-l tomorrow with the new list of patches.

You're doing great.  Take some rest.

> PS.
> Tools/scripts/logmerge.py loses version numbers. That pretty much
> sucks for doing the work I did, even though the raw log was worse --
> I ended up cross referencing and finding version numbers by hand.
> If anyone doesn't have anything better to do, here's a nice gift
> for 2.1 ;-)

Yes, it sucks.  Feel free to check in a change into the 2.1 tree!

> PPS.
> Most of the work I can do myself just fine. There are a couple of places
> where I could *really* need some help. One of those is testing fixes
> for bugs which manifest on exotic OSes (and as far as I'm concerned, 
> Windows is as exotic as they come <95 wink>.) Please let me know if
> you're interested in testing patches for them.

PL will volunteer Win98se and Win2000 testing.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From martin@loewis.home.cs.tu-berlin.de  Wed Mar 28 20:25:19 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 28 Mar 2001 22:25:19 +0200
Subject: [Python-Dev] List of Patches to Go in 2.0.1
Message-ID: <200103282025.f2SKPJj04355@mira.informatik.hu-berlin.de>

> This one Aahz singled out previously as a canonical example of a patch he
> would *not* include, because adding new attributes seemed potentially
> disruptive to him (but why?  maybe someone was depending on the precise value
> of len(dir(xrange(42)))?).

There is a patch on SF which backports that change without introducing
these attributes in the 2.0.1 class.

Regards,
Martin



From martin@loewis.home.cs.tu-berlin.de  Wed Mar 28 20:39:20 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 28 Mar 2001 22:39:20 +0200
Subject: [Python-Dev] List of Patches to Go in 2.0.1
Message-ID: <200103282039.f2SKdKB04694@mira.informatik.hu-berlin.de>

> Modules/_tkinter.c, 1.114->1.115
> Do not release unallocated Tcl objects. Closes #117278 and  #117167.

That is already committed to the maintenance branch.

> Modules/pyexpat.c, 2.42->2.43

There are a number of memory leaks that I think should get fixed,
inside these changes:

2.33->2.34
2.31->2.32 (garbage collection, and missing free calls)

I can produce a patch that only has those changes.

Martin


From michel@digicool.com  Wed Mar 28 21:00:57 2001
From: michel@digicool.com (Michel Pelletier)
Date: Wed, 28 Mar 2001 13:00:57 -0800 (PST)
Subject: [Python-Dev] Updated, shorter PEP 245
Message-ID: <Pine.LNX.4.32.0103281241530.3864-100000@localhost.localdomain>

Hi folks,

I have broken PEP 245 into two different PEPs, the first, which is now PEP
245, covers only the syntax and the changes to the Python language.  It is
much shorter and sweeter than the old one.

The second one, yet to have a number or to be totally polished off,
describes my proposed interface *model* based on the Zope interfaces work
and the previous incarnation of PEP 245.  This next PEP is totally
independent of PEP 245, and can be accepted or rejected independent of the
syntax if a different model is desired.

In fact, Amos Latteier has proposed to me a different, simpler, though
less functional model that would make an excellent alternative.  I'll
encourage him to formalize it.  Or would it be acceptable to offer two
possible models in the same PEP?

Finally, I foresee a third PEP to cover issues beyond the model, like type
checking, interface enforcement, and formalizing well-known Python
"protocols" as interfaces.  That's work for later consideration, which is
also independent of the previous two PEPs.

The *new* PEP 245 can be found at the following link:

http://www.zope.org/Members/michel/MyWiki/InterfacesPEP/PEP245.txt

Enjoy, and please feel free to comment.

-Michel




From michel@digicool.com  Wed Mar 28 21:12:09 2001
From: michel@digicool.com (Michel Pelletier)
Date: Wed, 28 Mar 2001 13:12:09 -0800 (PST)
Subject: [Python-Dev] Updated, shorter PEP 245
In-Reply-To: <Pine.LNX.4.32.0103281241530.3864-100000@localhost.localdomain>
Message-ID: <Pine.LNX.4.32.0103281311420.3864-100000@localhost.localdomain>


On Wed, 28 Mar 2001, Michel Pelletier wrote:

> The *new* PEP 245 can be found at the following link:
>
> http://www.zope.org/Members/michel/MyWiki/InterfacesPEP/PEP245.txt

It's also available in a formatted version at the python dev site:

http://python.sourceforge.net/peps/pep-0245.html

-Michel



From moshez@zadka.site.co.il  Wed Mar 28 21:10:14 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 23:10:14 +0200
Subject: [Python-Dev] Re: List of Patches to Go in 2.0.1
In-Reply-To: <200103282039.f2SKdKB04694@mira.informatik.hu-berlin.de>
References: <200103282039.f2SKdKB04694@mira.informatik.hu-berlin.de>
Message-ID: <E14iNCo-00014t-00@darjeeling>

On Wed, 28 Mar 2001, "Martin v. Loewis" <martin@loewis.home.cs.tu-berlin.de> wrote:

> > Modules/_tkinter.c, 1.114->1.115
> > Do not release unallocated Tcl objects. Closes #117278 and  #117167.
> 
> That is already committed to the maintenance branch.

Thanks, deleted.

> > Modules/pyexpat.c, 2.42->2.43
> 
> There are a number of memory leaks that I think should get fixed,
> inside these changes:
> 
> 2.33->2.34
> 2.31->2.32 (garbage collection, and missing free calls)
> 
> I can produce a patch that only has those changes.

Yes, that would be very helpful. 
Please assign it to me if you post it at SF.
The problem I had with the XML code (which had a couple of other bug
fixes) was that every change was just "resynced with PyXML tree", which
seemed to me too large to be safe...
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From barry@digicool.com  Wed Mar 28 21:14:42 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Wed, 28 Mar 2001 16:14:42 -0500
Subject: [Python-Dev] Updated, shorter PEP 245
References: <Pine.LNX.4.32.0103281241530.3864-100000@localhost.localdomain>
Message-ID: <15042.21570.617105.910629@anthem.wooz.org>

>>>>> "MP" == Michel Pelletier <michel@digicool.com> writes:

    MP> In fact, Amos Latteier has proposed to me a different,
    MP> simpler, though less functional model that would make an
    MP> excellent alternative.  I'll encourage him to formalize it.
    MP> Or would it be acceptable to offer two possible models in the
    MP> same PEP?

It would probably be better to have them as two separate (competing)
PEPs.

-Barry


From mwh21@cam.ac.uk  Wed Mar 28 22:55:36 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: 28 Mar 2001 23:55:36 +0100
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: "Tim Peters"'s message of "Wed, 21 Mar 2001 17:30:52 -0500"
References: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com>
Message-ID: <m3g0fxcxlj.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one@home.com> writes:

> I'm calling this one a bug in doctest.py, and will fix it there.  Ugly:
> since we can no longer rely on list.sort() not raising exceptions, it won't be
> enough to replace the existing
> 
>     for k, v in dict.items():
> 
> with
> 
>     items = dict.items()
>     items.sort()
>     for k, v in items:

Hmm, reading through these posts for summary purposes, it occurs to me
that this *is* safe, 'cause item 0 of the tuples will always be
distinct strings, and as equal-length tuples are compared
lexicographically, the values will never actually be compared!
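[A quick sketch, not from the original mail, checking the claim: give
the values a comparison that raises, and the sort still succeeds,
because the distinct string keys settle every comparison first.]

```python
class Unorderable:
    def __lt__(self, other):
        # If sorting ever compared two values, this would blow up.
        raise TypeError("values should never be compared")

items = [("b", Unorderable()), ("a", Unorderable()), ("c", Unorderable())]
items.sort()  # no exception: tuple comparison stops at the string keys
keys = [k for k, _ in items]
```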

pointless-ly y'rs
M.

-- 
93. When someone says "I want a programming language in which I
    need only say what I wish done," give him a lollipop.
  -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html



From mwh21@cam.ac.uk  Thu Mar 29 12:06:00 2001
From: mwh21@cam.ac.uk (Michael Hudson)
Date: Thu, 29 Mar 2001 13:06:00 +0100 (BST)
Subject: [Python-Dev] python-dev summary, 2001-03-15 - 2001-03-29
Message-ID: <Pine.LNX.4.10.10103291304110.866-100000@localhost.localdomain>

 This is a summary of traffic on the python-dev mailing list between
 Mar 15 and Mar 28 (inclusive) 2001.  It is intended to inform the
 wider Python community of ongoing developments.  To comment, just
 post to python-list@python.org or comp.lang.python in the usual
 way. Give your posting a meaningful subject line, and if it's about a
 PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep
 iteration). All python-dev members are interested in seeing ideas
 discussed by the community, so don't hesitate to take a stance on a
 PEP if you have an opinion.

 This is the fourth summary written by Michael Hudson.
 Summaries are archived at:

  <http://starship.python.net/crew/mwh/summaries/>

   Posting distribution (with apologies to mbm)

   Number of articles in summary: 410

    50 |                 [|]                                    
       |                 [|]                                    
       |                 [|]                                    
       |                 [|]                                    
    40 |                 [|]                                    
       |                 [|] [|]                                
       | [|]             [|] [|]                                
       | [|]             [|] [|] [|]     [|]                    
    30 | [|]             [|] [|] [|]     [|]                    
       | [|]             [|] [|] [|]     [|]                    
       | [|]             [|] [|] [|]     [|] [|]                
       | [|]         [|] [|] [|] [|]     [|] [|]             [|]
    20 | [|] [|]     [|] [|] [|] [|]     [|] [|]             [|]
       | [|] [|]     [|] [|] [|] [|] [|] [|] [|]             [|]
       | [|] [|]     [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
       | [|] [|]     [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
    10 | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
       | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
       | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]
       | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]
     0 +-044-024-013-029-059-046-040-022-040-031-007-019-008-028
        Thu 15| Sat 17| Mon 19| Wed 21| Fri 23| Sun 25| Tue 27|
            Fri 16  Sun 18  Tue 20  Thu 22  Sat 24  Mon 26  Wed 28

 Bug-fixing for 2.1 remained a priority for python-dev this fortnight
 which saw the release of 2.1b2 last Friday.


    * Python 2.0.1 *

 Aahz posted his first draft of PEP 6, outlining the process by which
 maintenance releases of Python should be made.

  <http://python.sourceforge.net/peps/pep-0006.html>

 Moshe Zadka has volunteered to be the "Patch Czar" for Python 2.0.1.

  <http://mail.python.org/pipermail/python-dev/2001-March/013952.html>

 I'm sure we can all join in the thanks due to Moshe for taking up
 this tedious but valuable job!


    * Simple Generator implementations *

 Neil Schemenauer posted links to a couple of "simple" implementations
 of generators (a.k.a. resumable functions) that do not depend on the
 stackless changes going in.

  <http://mail.python.org/pipermail/python-dev/2001-March/013648.html>
  <http://mail.python.org/pipermail/python-dev/2001-March/013666.html>

 These implementations have the advantage that they might be
 applicable to Jython, something that sadly cannot be said of
 stackless.
 

    * portable file-system stuff *

 The longest thread of the summary period started off with a request
 for a portable way to find out free disk space:

  <http://mail.python.org/pipermail/python-dev/2001-March/013706.html>

 After a slightly acrimonious debate about the nature of Python
 development, /F produced a patch that implements partial support for
 os.statvfs on Windows:

  <http://sourceforge.net/tracker/index.php?func=detail&aid=410547&group_id=5470&atid=305470>

 which can be used to extract such information.

 A side-product of this discussion was the observation that although
 Python has a module that does some file manipulation, shutil, it is
 far from being as portable as it might be - in particular it fails
 miserably on the Mac where it ignores resource forks.  Greg Ward then
 pointed out that he had to implement cross-platform file copying for
 the distutils

  <http://mail.python.org/pipermail/python-dev/2001-March/013962.html>

 so perhaps all that needs to be done is for this stuff to be moved
 into the core.  It seems very unlikely there will be much movement
 here before 2.2.



From fdrake@cj42289-a.reston1.va.home.com  Thu Mar 29 13:01:26 2001
From: fdrake@cj42289-a.reston1.va.home.com (Fred Drake)
Date: Thu, 29 Mar 2001 08:01:26 -0500 (EST)
Subject: [Python-Dev] [development doc updates]
Message-ID: <20010329130126.C3EED2888E@cj42289-a.reston1.va.home.com>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/


For Peter Funk:  Removed space between function/method/class names and
their parameter lists for easier cut & paste.  This is a *tentative*
change; feedback is appreciated at python-docs@python.org.

Also added some new information on integrating with the cycle detector
and some additional C APIs introduced in Python 2.1 (PyObject_IsInstance(),
PyObject_IsSubclass()).



From dalke@acm.org  Thu Mar 29 23:07:17 2001
From: dalke@acm.org (Andrew Dalke)
Date: Thu, 29 Mar 2001 16:07:17 -0700
Subject: [Python-Dev] 'mapping' in weakrefs unneeded?
Message-ID: <015101c0b8a5$00c37ce0$d795fc9e@josiah>

Hello all,

  I'm starting to learn how to use weakrefs.  I'm curious
about the function named 'mapping'.  It is implemented as:

> def mapping(dict=None,weakkeys=0):
>     if weakkeys:
>         return WeakKeyDictionary(dict)
>     else:
>         return WeakValueDictionary(dict)

Why is this a useful function?  Shouldn't people just call
WeakKeyDictionary and WeakValueDictionary directly instead
of calling mapping with a parameter to specify which class
to construct?

If anything, this function is very confusing.  Take the
associated documentation as a case in point:

> mapping([dict[, weakkeys=0]]) 
> Return a weak dictionary. If dict is given and not None,
> the new dictionary will contain the items contained in dict.
> The values from dict must be weakly referencable; if any
> values which would be inserted into the new mapping are not
> weakly referencable, TypeError will be raised and the new
> mapping will be empty.
>
> If the weakkeys argument is not given or zero, the values in
> the dictionary are weak. That means the entries in the
> dictionary will be discarded when no strong reference to the
> value exists anymore. 
>
> If the weakkeys argument is nonzero, the keys in the
> dictionary are weak, i.e. the entry in the dictionary is
> discarded when the last strong reference to the key is
> discarded. 

As far as I can tell, this documentation is wrong, or at
the very least confusing.  For example, it says:
> The values from dict must be weakly referencable

but when the weakkeys argument is nonzero,
> the keys in the dictionary are weak

So must both keys and values be weak?  Or only the keys?
I hope the latter, since there are cases I can think of
where I want the keys to be weak and the values to be types,
hence not weakly referencable.

Wouldn't it be better to remove the 'mapping' function and
only have WeakKeyDictionary and WeakValueDictionary?
In which case the documentation becomes:

> WeakValueDictionary([dict]) 
> Return a weak dictionary. If dict is given and not None,
> the new dictionary will contain the items contained in dict.
> The values from dict must be weakly referencable; if any
> values which would be inserted into the new mapping are not
> weakly referencable, TypeError will be raised and the new
> mapping will be empty.
>
> The values in
> the dictionary are weak. That means the entries in the
> dictionary will be discarded when no strong reference to the
> value exists anymore. 

> WeakKeyDictionary([dict]) 
> Return a weak dictionary. If dict is given and not None,
> the new dictionary will contain the items contained in dict.
> The keys from dict must be weakly referencable; if any
> keys which would be inserted into the new mapping are not
> weakly referencable, TypeError will be raised and the new
> mapping will be empty.
>
> The keys in
> the dictionary are weak. That means the entries in the
> dictionary will be discarded when no strong reference to the
> key exists anymore. 

Easier to read and to see the parallels between the two
styles, IMHO of course.
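[An editor's illustration, not from the mail, of using the two classes
directly; it also bears on the question above: with WeakKeyDictionary
only the *keys* must be weakly referencable, so the values can be
plain types.]

```python
import weakref

class Obj:
    pass

# Weak values: the entry vanishes once the last strong reference to
# the value is dropped (immediately, under CPython's refcounting).
d = weakref.WeakValueDictionary()
o = Obj()
d["key"] = o
assert "key" in d
del o
assert "key" not in d

# Weak keys: the values here are types, which are not weakly
# referencable themselves -- and that is fine.
k = Obj()
d2 = weakref.WeakKeyDictionary()
d2[k] = int
```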

I am not on this list though I will try to read the
archives online for the next couple of days.  Please
CC me about any resolution to this topic.

Sincerely,

                    Andrew
                    dalke@acm.org




From martin@loewis.home.cs.tu-berlin.de  Fri Mar 30 07:55:59 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 30 Mar 2001 09:55:59 +0200
Subject: [Python-Dev] Assigning to __debug__
Message-ID: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de>

After the recent change that assignments to __debug__ are disallowed,
I noticed that IDLE stops working (see SF bug report), since it was
assigning to __debug__. 

Simply commenting out the assignment (to zero) did no good: inside the
__debug__ blocks, IDLE would try to perform print statements, which
would write to the re-assigned sys.stdout, which would invoke the code
that had the __debug__ block, which would fail due to infinite
recursion. So essentially, you either have to remove the __debug__
blocks, or rewrite them to write to save_stdout - in which case all
the ColorDelegator debug messages appear in the terminal window.

So anybody porting to Python 2.1 will essentially have to remove all
__debug__ blocks that were previously disabled by assigning 0 to
__debug__. I think this is undesirable.

As I recall, in the original description of __debug__, being able to
assign to it was reported as one of its main features, so that you
still had a run-time option (unless the interpreter was running with
-O, which eliminates the __debug__ blocks).

So in short, I think this change should be reverted.

Regards,
Martin

P.S. What was the motivation for that change, anyway?


From mal@lemburg.com  Fri Mar 30 08:06:42 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 10:06:42 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de>
Message-ID: <3AC43E92.C269D98D@lemburg.com>

"Martin v. Loewis" wrote:
> 
> After the recent change that assignments to __debug__ are disallowed,
> I noticed that IDLE stops working (see SF bug report), since it was
> assigning to __debug__.
> 
> Simply commenting out the assignment (to zero) did no good: inside the
> __debug__ blocks, IDLE would try to perform print statements, which
> would write to the re-assigned sys.stdout, which would invoke the code
> that had the __debug__ block, which would fail due to infinite
> recursion. So essentially, you either have to remove the __debug__
> blocks, or rewrite them to write to save_stdout - in which case all
> the ColorDelegator debug messages appear in the terminal window.
> 
> So anybody porting to Python 2.1 will essentially have to remove all
> __debug__ blocks that were previously disabled by assigning 0 to
> __debug__. I think this is undesirable.
> 
> As I recall, in the original description of __debug__, being able to
> assign to it was reported as one of its main features, so that you
> still had a run-time option (unless the interpreter was running with
> -O, which eliminates the __debug__ blocks).
> 
> So in short, I think this change should be reverted.

+1 from here... 

I use the same concept for debugging: during development I set 
__debug__ to 1, in production I change it to 0 (python -O does this
for me as well).

> Regards,
> Martin
> 
> P.S. What was the motivation for that change, anyway?
> 

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From guido@digicool.com  Fri Mar 30 13:30:18 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 08:30:18 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 09:55:59 +0200."
 <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de>
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de>
Message-ID: <200103301330.IAA23144@cj20424-a.reston1.va.home.com>

> After the recent change that assignments to __debug__ are disallowed,
> I noticed that IDLE stops working (see SF bug report), since it was
> assigning to __debug__. 

I checked in a fix to IDLE too, but it seems you were using an
externally-installed version of IDLE.

> Simply commenting out the assignment (to zero) did no good: inside the
> __debug__ blocks, IDLE would try to perform print statements, which
> would write to the re-assigned sys.stdout, which would invoke the code
> that had the __debug__ block, which would fail due to infinite
> recursion. So essentially, you either have to remove the __debug__
> blocks, or rewrite them to write to save_stdout - in which case all
> the ColorDelegator debug messages appear in the terminal window.

IDLE was totally abusing the __debug__ variable -- in the fix, I
simply changed all occurrences of __debug__ to DEBUG.
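[The replacement pattern is just an ordinary module-level flag; a
minimal sketch -- the name DEBUG matches the IDLE fix, everything else
here is illustrative:]

```python
DEBUG = 0  # plain module flag; set to 1 during development

def colorize(text):
    # Debug chatter is guarded by the flag, not by __debug__, so it
    # can be toggled at run time without touching a reserved name.
    if DEBUG:
        print("colorizing %r" % text)
    return text.upper()
```

[Unlike an "if __debug__:" block, this is never stripped by -O, but it
also never collides with the interpreter's __underscore__ names.]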

> So anybody porting to Python 2.1 will essentially have to remove all
> __debug__ blocks that were previously disabled by assigning 0 to
> __debug__. I think this is undesirable.

Assigning to __debug__ was never well-defined.  You used it at your
own risk.

> As I recall, in the original description of __debug__, being able to
> assign to it was reported as one of its main features, so that you
> still had a run-time option (unless the interpreter was running with
> -O, which eliminates the __debug__ blocks).

The manual has always used words that suggest that there is something
special about __debug__.  And there was: the compiler assumed it could
eliminate blocks started with "if __debug__:" when compiling in -O
mode.  Also, assert statements have always used LOAD_GLOBAL to
retrieve the __debug__ variable.

> So in short, I think this change should be reverted.

It's possible that it breaks more code, and it's possible that we end
up having to change the error into a warning for now.  But I insist
that assignment to __debug__ should become illegal.  You can *use* the
variable (to determine whether -O is on or not), but you can't *set*
it.

> Regards,
> Martin
> 
> P.S. What was the motivation for that change, anyway?

To enforce a restriction that was always intended: __debug__ should be
a read-only variable.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From mal@lemburg.com  Fri Mar 30 13:42:59 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 15:42:59 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>
Message-ID: <3AC48D63.A8AFA489@lemburg.com>

Guido van Rossum wrote:
> > ...
> > So anybody porting to Python 2.1 will essentially have to remove all
> > __debug__ blocks that were previously disabled by assigning 0 to
> > __debug__. I think this is undesirable.
> 
> Assigning to __debug__ was never well-defined.  You used it at your
> own risk.
> 
> > As I recall, in the original description of __debug__, being able to
> > assign to it was reported as one of its main features, so that you
> > still had a run-time option (unless the interpreter was running with
> > -O, which eliminates the __debug__ blocks).
> 
> The manual has always used words that suggest that there is something
> special about __debug__.  And there was: the compiler assumed it could
> eliminate blocks started with "if __debug__:" when compiling in -O
> mode.  Also, assert statements have always used LOAD_GLOBAL to
> retrieve the __debug__ variable.
> 
> > So in short, I think this change should be reverted.
> 
> It's possible that it breaks more code, and it's possible that we end
> up having to change the error into a warning for now.  But I insist
> that assignment to __debug__ should become illegal.  You can *use* the
> variable (to determine whether -O is on or not), but you can't *set*
> it.
> 
> > Regards,
> > Martin
> >
> > P.S. What was the motivation for that change, anyway?
> 
> To enforce a restriction that was always intended: __debug__ should be
> a read-only variable.

So you are suggesting that we change all our code to something like:

__enable_debug__ = 0 # set to 0 for production mode

...

if __debug__ and __enable_debug__:
   print 'debugging information'

...

I don't see the point in having to introduce a new variable
just to disable debugging code in Python code which does not
run under -O.

What does defining __debug__ as read-only variable buy us 
in the long term ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From guido@digicool.com  Fri Mar 30 14:02:35 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 09:02:35 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 15:42:59 +0200."
 <3AC48D63.A8AFA489@lemburg.com>
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>
 <3AC48D63.A8AFA489@lemburg.com>
Message-ID: <200103301402.JAA23365@cj20424-a.reston1.va.home.com>

> So you are suggesting that we change all our code to something like:
> 
> __enable_debug__ = 0 # set to 0 for production mode
> 
> ...
> 
> if __debug__ and __enable_debug__:
>    print 'debugging information'
> 
> ...

I can't suggest anything, because I have no idea what semantics you
are assuming for __debug__ here, and I have no idea what you want with
that code.  Maybe you'll want to say "__debug__ = 1" even when you are
in -O mode -- that will definitely not work!

The form above won't (currently) be optimized out -- only "if
__debug__:" is optimized away, nothing more complicated (not even "if
(__debug__):").

In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
__UNDERSCORE__ CONVENTION!  Those names are reserved for the
interpreter, and you risk that they will be assigned a different
semantics in the future.

> I don't see the point in having to introduce a new variable
> just to disable debugging code in Python code which does not
> run under -O.
> 
> What does defining __debug__ as read-only variable buy us 
> in the long term ?

It allows the compiler to assume that __debug__ is a built-in name.
In the future, the __debug__ variable may become meaningless, as we
develop more differentiated optimization options.

The *only* acceptable use for __debug__ is to get rid of code that is
essentially an assertion but can't be spelled with just an assertion,
e.g.

def f(L):
    if __debug__:
        # Assert L is a list of integers:
        for item in L:
            assert isinstance(item, type(1))
    ...

--Guido van Rossum (home page: http://www.python.org/~guido/)


From fredrik@pythonware.com  Fri Mar 30 14:07:08 2001
From: fredrik@pythonware.com (Fredrik Lundh)
Date: Fri, 30 Mar 2001 16:07:08 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>             <3AC48D63.A8AFA489@lemburg.com>  <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <018001c0b922$b58b5d50$0900a8c0@SPIFF>

guido wrote:
> In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> __UNDERSCORE__ CONVENTION!

is the "__version__" convention documented somewhere?

Cheers /F



From moshez@zadka.site.co.il  Fri Mar 30 14:21:27 2001
From: moshez@zadka.site.co.il (Moshe Zadka)
Date: Fri, 30 Mar 2001 16:21:27 +0200
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <018001c0b922$b58b5d50$0900a8c0@SPIFF>
References: <018001c0b922$b58b5d50$0900a8c0@SPIFF>, <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>             <3AC48D63.A8AFA489@lemburg.com>  <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <E14izmJ-0006yR-00@darjeeling>

On Fri, 30 Mar 2001, "Fredrik Lundh" <fredrik@pythonware.com> wrote:
 
> is the "__version__" convention documented somewhere?

Yes. I don't remember where, but the words are something like "the __ names
are reserved for use by the infrastructure, loosely defined as the interpreter
and the standard library. Code which has aspirations to be part of the
infrastructure must use a unique prefix like __bobo_pos__"

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez@debian.org  |http://www.{python,debian,gnu}.org


From guido@digicool.com  Fri Mar 30 14:40:00 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 09:40:00 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 16:07:08 +0200."
 <018001c0b922$b58b5d50$0900a8c0@SPIFF>
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
 <018001c0b922$b58b5d50$0900a8c0@SPIFF>
Message-ID: <200103301440.JAA23550@cj20424-a.reston1.va.home.com>

> guido wrote:
> > In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> > __UNDERSCORE__ CONVENTION!
> 
> is the "__version__" convention documented somewhere?

This is a trick question, right?  :-)

__version__ may not be documented but is in de-facto use.  Folks
introducing other names (e.g. __author__, __credits__) should really
consider a PEP before grabbing a piece of the namespace.

--Guido van Rossum (home page: http://www.python.org/~guido/)


From mal@lemburg.com  Fri Mar 30 15:10:17 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 17:10:17 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>
 <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <3AC4A1D9.9D4C5BF7@lemburg.com>

Guido van Rossum wrote:
> 
> > So you are suggesting that we change all our code to something like:
> >
> > __enable_debug__ = 0 # set to 0 for production mode
> >
> > ...
> >
> > if __debug__ and __enable_debug__:
> >    print 'debugging information'
> >
> > ...
> 
> I can't suggest anything, because I have no idea what semantics you
> are assuming for __debug__ here, and I have no idea what you want with
> that code.  Maybe you'll want to say "__debug__ = 1" even when you are
> in -O mode -- that will definitely not work!

I know, but that's what I'm expecting. The point was to be able
to disable debugging code when running Python in non-optimized mode.
We'd have to change our code and use a new variable to work
around the SyntaxError exception.

While this is not so much of a problem for new code, existing code
will break (i.e. no longer byte-compile) in Python 2.1.

A warning would be OK, but adding yet another SyntaxError for previously 
perfectly valid code is not going to make the Python users out there 
very happy... the current situation, with two different versions
(Python 1.5.2 and 2.0) in common use, is already a pain to maintain
due to DLL problems on Windows platforms.

I don't think that introducing even more subtle problems in 2.1
is going to be well accepted by Joe User.
 
> The form above won't (currently) be optimized out -- only "if
> __debug__:" is optimized away, nothing more complicated (not even "if
> (__debug__):".

Ok, make the code look like this then:

if __debug__:
   if enable_debug:
       print 'debug info'
 
> In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> __UNDERSCORE__ CONVENTION!  Those names are reserved for the
> interpreter, and you risk that they will be assigned a different
> semantics in the future.

Hey, this was just an example... ;-)

> > I don't see the point in having to introduce a new variable
> > just to disable debugging code in Python code which does not
> > run under -O.
> >
> > What does defining __debug__ as read-only variable buy us
> > in the long term ?
> 
> It allows the compiler to assume that __debug__ is a built-in name.
> In the future, the __debug__ variable may become meaningless, as we
> develop more differentiated optimization options.
> 
> The *only* acceptable use for __debug__ is to get rid of code that is
> essentially an assertion but can't be spelled with just an assertion,
> e.g.
> 
> def f(L):
>     if __debug__:
>         # Assert L is a list of integers:
>         for item in L:
>             assert isinstance(item, type(1))
>     ...

Maybe just me, but I use __debug__ a lot to do extra logging or 
printing in my code too; not just for assertions.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From barry@digicool.com  Fri Mar 30 15:38:48 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Fri, 30 Mar 2001 10:38:48 -0500
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de>
 <200103301330.IAA23144@cj20424-a.reston1.va.home.com>
 <3AC48D63.A8AFA489@lemburg.com>
 <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <15044.43144.133911.800065@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido@digicool.com> writes:

    GvR> The *only* acceptable use for __debug__ is to get rid of code
    GvR> that is essentially an assertion but can't be spelled with
    GvR> just an assertion, e.g.

Interestingly enough, last night Jim Fulton and I talked about a
situation where you might want asserts to survive running under -O,
because you want to take advantage of other optimizations, but you
still want to assert certain invariants in your code.

Of course, you can do this now by just not using the assert
statement.  So that's what we're doing, and for giggles we're multiply
inheriting the exception we raise from AssertionError and our own
exception.  What I think we'd prefer is a separate switch to control
optimization and the disabling of assert.
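The workaround Barry describes can be sketched roughly as follows (the names
here are hypothetical illustrations, not the actual code he and Jim wrote):
raise a custom exception that multiply inherits from AssertionError, so
existing `except AssertionError` handlers still catch it, while the check
itself is an ordinary `raise` and therefore survives -O:

```python
class InvariantError(AssertionError, RuntimeError):
    """Raised for invariant violations that must survive -O."""
    pass

def require(condition, message=""):
    # Unlike an `assert` statement, this call is never stripped by -O.
    if not condition:
        raise InvariantError(message)

def withdraw(balance, amount):
    require(amount <= balance, "overdraft")
    return balance - amount
```

Callers that already trap AssertionError keep working, since InvariantError
is a subclass of it.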

-Barry


From thomas.heller@ion-tof.com  Fri Mar 30 15:43:00 2001
From: thomas.heller@ion-tof.com (Thomas Heller)
Date: Fri, 30 Mar 2001 17:43:00 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <0a8201c0b930$19fc0750$e000a8c0@thomasnotebook>

IMO the fix to this bug should also go into 2.0.1:

Bug id 231064, sys.path not set correctly in embedded python interpreter

which is fixed in revision 1.23 of PC/getpathp.c


Thomas Heller



From thomas@xs4all.net  Fri Mar 30 15:48:28 2001
From: thomas@xs4all.net (Thomas Wouters)
Date: Fri, 30 Mar 2001 17:48:28 +0200
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <15044.43144.133911.800065@anthem.wooz.org>; from barry@digicool.com on Fri, Mar 30, 2001 at 10:38:48AM -0500
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com> <15044.43144.133911.800065@anthem.wooz.org>
Message-ID: <20010330174828.K13066@xs4all.nl>

On Fri, Mar 30, 2001 at 10:38:48AM -0500, Barry A. Warsaw wrote:

> What I think we'd prefer is a separate switch to control
> optimization and the disabling of assert.

You mean something like

#!/usr/bin/python -fno-asserts -fno_debug_ -fdocstrings -fdeadbranch 

Right!-ly y'rs,
-- 
Thomas Wouters <thomas@xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!


From Paul.Moore@uk.origin-it.com  Fri Mar 30 15:52:04 2001
From: Paul.Moore@uk.origin-it.com (Moore, Paul)
Date: Fri, 30 Mar 2001 16:52:04 +0100
Subject: [Python-Dev] PEP: Use site-packages on all platforms
Message-ID: <714DFA46B9BBD0119CD000805FC1F53B01B5ADD9@ukrux002.rundc.uk.origin-it.com>

It was suggested that I post this to python-dev, as well as python-list and
the distutils SIG. I apologise if this is being done backwards? Should I get
a proper PEP number first, or is it appropriate to ask for initial comments
like this?

Paul

-----Original Message-----
From: Moore, Paul 
Sent: 30 March 2001 13:32
To: distutils-sig@python.org
Cc: 'python-list@python.org'
Subject: [Distutils] PEP: Use site-packages on all platforms


Attached is a first draft of a proposal to use the "site-packages" directory
for locally installed modules, on all platforms instead of just on Unix. If
the consensus is that this is a worthwhile proposal, I'll submit it as a
formal PEP.

Any advice or suggestions welcomed; I've never written a PEP before, so I
hope I've got the procedure right...

Paul Moore

PEP: TBA
Title: Install local packages in site-packages on all platforms
Version: $Revision$
Author: Paul Moore <gustav@morpheus.demon.co.uk>
Status: Draft
Type: Standards Track
Python-Version: 2.2
Created: 2001-03-30
Post-History: TBA

Abstract

    The standard Python distribution includes a directory Lib/site-packages,
    which is used on Unix platforms to hold locally-installed modules and
    packages. The site.py module distributed with Python includes support for
    locating modules in this directory.

    This PEP proposes that the site-packages directory should be used
    uniformly across all platforms for locally installed modules.


Motivation

    On Windows platforms, the default setting for sys.path does not include a
    directory suitable for users to install locally-developed modules. The
    "expected" location appears to be the directory containing the Python
    executable itself. Including locally developed code in the same directory
    as installed executables is not good practice.

    Clearly, users can manipulate sys.path, either in a locally modified
    site.py, or in a suitable sitecustomize.py, or even via .pth files.
    However, there should be a standard location for such files, rather than
    relying on every individual site having to set their own policy.

    In addition, with distutils becoming more prevalent as a means of
    distributing modules, the need for a standard install location for
    distributed modules will become more common. It would be better to define
    such a standard now, rather than later when more distutils-based packages
    exist which will need rebuilding.

    It is relevant to note that prior to Python 2.1, the site-packages
    directory was not included in sys.path for Macintosh platforms. This has
    been changed in 2.1, so the Macintosh now includes site-packages, leaving
    Windows as the only major platform with no site-specific modules
    directory.


Implementation

    The implementation of this feature is fairly trivial. All that would be
    required is a change to site.py, to change the section setting sitedirs.
    The Python 2.1 version has

        if os.sep == '/':
            sitedirs = [makepath(prefix,
                                 "lib",
                                 "python" + sys.version[:3],
                                 "site-packages"),
                        makepath(prefix, "lib", "site-python")]
        elif os.sep == ':':
            sitedirs = [makepath(prefix, "lib", "site-packages")]
        else:
            sitedirs = [prefix]

    A suitable change would be to simply replace the last 4 lines with

        else:
            sitedirs = [makepath(prefix, "lib", "site-packages")]

    Changes would also be required to distutils, in the sysconfig.py file. It
    is worth noting that this file does not seem to have been updated in line
    with the change of policy on the Macintosh, as of this writing.
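    For illustration, the merged logic could be exercised stand-alone as
    below. This is only a sketch of the proposed behaviour, not the actual
    site.py patch; makepath is simplified here from the real helper in
    site.py:

```python
import os
import sys

def proposed_sitedirs(prefix):
    # Simplified mirror of the proposed site.py logic: Unix keeps its
    # version-specific layout; every other platform (Windows, Mac)
    # gets <prefix>/lib/site-packages.
    def makepath(*paths):
        return os.path.normcase(os.path.join(os.path.abspath(paths[0]),
                                             *paths[1:]))
    if os.sep == '/':
        return [makepath(prefix, "lib",
                         "python" + sys.version[:3], "site-packages"),
                makepath(prefix, "lib", "site-python")]
    else:
        return [makepath(prefix, "lib", "site-packages")]
```

    On a Unix box this yields the familiar two directories; on Windows or
    the Macintosh it yields the single prefix/lib/site-packages entry that
    the PEP proposes.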

Notes

    1. It would be better if this change could be included in Python 2.1, as
       changing something of this nature is better done sooner, rather than
       later, to reduce the backward-compatibility burden. This is extremely
       unlikely to happen at this late stage in the release cycle, however.

    2. This change does not preclude packages using the current location -
       the change only adds a directory to sys.path, it does not remove
       anything.

    3. In the Windows distribution of Python 2.1 (beta 1), the
       Lib\site-packages directory has been removed. It would need to be
       reinstated.


Copyright

    This document has been placed in the public domain.

_______________________________________________
Distutils-SIG maillist  -  Distutils-SIG@python.org
http://mail.python.org/mailman/listinfo/distutils-sig


From mal@lemburg.com  Fri Mar 30 16:09:26 2001
From: mal@lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 18:09:26 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com> <15044.43144.133911.800065@anthem.wooz.org> <20010330174828.K13066@xs4all.nl>
Message-ID: <3AC4AFB6.23A17755@lemburg.com>

Thomas Wouters wrote:
> 
> On Fri, Mar 30, 2001 at 10:38:48AM -0500, Barry A. Warsaw wrote:
> 
> > What I think we'd prefer is a separate switch to control
> > optimization and the disabling of assert.
> 
> You mean something like
> 
> #!/usr/bin/python -fno-asserts -fno_debug_ -fdocstrings -fdeadbranch

Sounds like a good idea, but how do you tell the interpreter
which asserts to leave enabled and which to remove from the
code?

In general, I agree, though: more fine-grained control
over optimizations would be a Good Thing (even more so since we
are talking about non-existent code analysis tools here ;-).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/


From paul@pfdubois.com  Fri Mar 30 17:01:39 2001
From: paul@pfdubois.com (Paul F. Dubois)
Date: Fri, 30 Mar 2001 09:01:39 -0800
Subject: [Python-Dev] Assigning to __debug__
Message-ID: <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com>

FWIW, this change broke a lot of my code and it took an hour or two to fix
it. I too was misled by the wording when __debug__ was introduced. I could
swear there were even examples of assigning to it, but maybe I'm dreaming.
Anyway, I thought I could.

Regardless of my delusions, this is another change that breaks code in the
middle of a beta cycle. I think that is not a good thing. It is one thing
when one goes to get a new beta or alpha; you expect to spend some time
then. It is another when one has been a good soldier, tried the beta, and
is now using it for routine work, and updating to a new version of it breaks
something because someone thought it ought to be broken. (If I don't use it
for my work, I certainly won't find any problems with it.) I realize that
this can't be a hard and fast rule, but I think this one in particular
deserves warning status now and a change in 2.2.



From barry@digicool.com  Fri Mar 30 17:16:28 2001
From: barry@digicool.com (Barry A. Warsaw)
Date: Fri, 30 Mar 2001 12:16:28 -0500
Subject: [Python-Dev] Assigning to __debug__
References: <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com>
Message-ID: <15044.49004.757215.882179@anthem.wooz.org>

>>>>> "PFD" == Paul F Dubois <paul@pfdubois.com> writes:

    PFD> Regardless of my delusions, this is another change that
    PFD> breaks code in the middle of a beta cycle.

I agree with Paul.  It's too late in the beta cycle to break code, and
I /also/ dimly remember assignment to __debug__ being semi-blessed.

Let's make it a warning or revert the change.

-Barry


From guido@digicool.com  Fri Mar 30 17:19:31 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 12:19:31 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 10:38:48 EST."
 <15044.43144.133911.800065@anthem.wooz.org>
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
 <15044.43144.133911.800065@anthem.wooz.org>
Message-ID: <200103301719.MAA24153@cj20424-a.reston1.va.home.com>

>     GvR> The *only* acceptable use for __debug__ is to get rid of code
>     GvR> that is essentially an assertion but can't be spelled with
>     GvR> just an assertion, e.g.
> 
> Interestingly enough, last night Jim Fulton and I talked about a
> situation where you might want asserts to survive running under -O,
> because you want to take advantage of other optimizations, but you
> still want to assert certain invariants in your code.
> 
> Of course, you can do this now by just not using the assert
> statement.  So that's what we're doing, and for giggles we're multiply
> inheriting the exception we raise from AssertionError and our own
> exception.  What I think we'd prefer is a separate switch to control
> optimization and the disabling of assert.

That's one of the things I was alluding to when I talked about more
diversified control over optimizations.  I guess then the __debug__
variable would indicate whether or not assertions are turned on;
something else would let you query the compiler's optimization level.
But assigning to __debug__ still wouldn't do what you wanted (unless
we decided to *make* this the way to turn assertions on or off in a
module -- but since this is a compile-time thing, it would require
that the rhs of the assignment was a constant).

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Fri Mar 30 17:37:37 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 12:37:37 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 09:01:39 PST."
 <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com>
References: <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com>
Message-ID: <200103301737.MAA24325@cj20424-a.reston1.va.home.com>

> FWIW, this change broke a lot of my code and it took an hour or two to fix
> it. I too was misled by the wording when __debug__ was introduced. I could
> swear there were even examples of assigning to it, but maybe I'm dreaming.
> Anyway, I thought I could.
> 
> Regardless of my delusions, this is another change that breaks code in the
> middle of a beta cycle. I think that is not a good thing. It is one thing
> when one goes to get a new beta or alpha; you expect to spend some time
> then. It is another when one has been a good soldier and tried the beta and
> is now using it for routine work and updating to a new version of it breaks
> something because someone thought it ought to be broken. (If I don't use it
> for my work I certainly won't find any problems with it). I realize that
> this can't be a hard and fast rule but I think this one in particular
> deserves warning status now and change in 2.2.

OK, this is the second confirmed report of broken 3rd party code, so
we'll change this into a warning.  Jeremy, that should be easy, right?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From guido@digicool.com  Fri Mar 30 17:41:41 2001
From: guido@digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 12:41:41 -0500
Subject: [Python-Dev] PEP: Use site-packages on all platforms
In-Reply-To: Your message of "Fri, 30 Mar 2001 16:52:04 +0100."
 <714DFA46B9BBD0119CD000805FC1F53B01B5ADD9@ukrux002.rundc.uk.origin-it.com>
References: <714DFA46B9BBD0119CD000805FC1F53B01B5ADD9@ukrux002.rundc.uk.origin-it.com>
Message-ID: <200103301741.MAA24378@cj20424-a.reston1.va.home.com>

I think this is a good idea.  Submit the PEP to Barry!

I doubt that we can introduce this into Python 2.1 this late in the
release cycle.  Would that be a problem?

--Guido van Rossum (home page: http://www.python.org/~guido/)


From martin@loewis.home.cs.tu-berlin.de  Fri Mar 30 18:31:31 2001
From: martin@loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 30 Mar 2001 20:31:31 +0200
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <200103301330.IAA23144@cj20424-a.reston1.va.home.com> (message
 from Guido van Rossum on Fri, 30 Mar 2001 08:30:18 -0500)
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>
Message-ID: <200103301831.f2UIVVm01525@mira.informatik.hu-berlin.de>

> I checked in a fix to IDLE too, but it seems you were using an
> externally-installed version of IDLE.

Sorry about that; I actually used one from CVS, with a sticky 2.0 tag
:-(

> Assigning to __debug__ was never well-defined.  You used it at your
> own risk.

When __debug__ was first introduced, the NEWS entry read

# Without -O, the assert statement actually generates code that first
# checks __debug__; if this variable is false, the assertion is not
# checked.  __debug__ is a built-in variable whose value is
# initialized to track the -O flag (it's true iff -O is not
# specified).  With -O, no code is generated for assert statements,
# nor for code of the form ``if __debug__: <something>''.

So it clearly says that it is a variable, and that assert will check
its value at runtime. I can't quote any specific messages, but I
recall that you've explained it that way in public as well.

Regards,
Martin


From tim.one@home.com  Fri Mar 30 20:17:00 2001
From: tim.one@home.com (Tim Peters)
Date: Fri, 30 Mar 2001 15:17:00 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <018001c0b922$b58b5d50$0900a8c0@SPIFF>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFMJJAA.tim.one@home.com>

[Guido]
> In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> __UNDERSCORE__ CONVENTION!

[/F]
> is the "__version__" convention documented somewhere?

In the Language Reference manual, section "Reserved classes of identifiers",
middle line of the table.  It would benefit from more words, though (it just
says "System-defined name" now, and hostile users are known to have trouble
telling themselves apart from "the system" <wink>).



From tim.one@home.com  Fri Mar 30 20:30:53 2001
From: tim.one@home.com (Tim Peters)
Date: Fri, 30 Mar 2001 15:30:53 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <200103301831.f2UIVVm01525@mira.informatik.hu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFPJJAA.tim.one@home.com>

Take a trip down memory lane:

    http://groups.yahoo.com/group/python-list/message/19647

That's the c.l.py msg in which Guido first introduced the idea of __debug__
(and DAMN was searching life easier before DejaNews lost its memory!).

The debate immediately following that (cmdline arguments and all) is being
reinvented here now.

Nothing actually changed from Guido's first proposal (above), except that he
gave up his opposition to making "assert" a reserved word (for which
far-seeing flexibility I am still most grateful), and he actually implemented
the "PS here's a variant" flavor.

I wasn't able to find anything in that debate where Guido explicitly said you
couldn't bind __debug__ yourself, but neither could I find anything saying
you could, and I believe him when he says "no binding" was the *intent*
(that's most consistent with everything he said at the time).

those-who-don't-remember-the-past-are-doomed-to-read-me-nagging-them-
    about-it<wink>-ly y'rs  - tim



From clee@v1.wustl.edu  Sat Mar 31 15:08:15 2001
From: clee@v1.wustl.edu (Christopher Lee)
Date: Sat, 31 Mar 2001 09:08:15 -0600 (CST)
Subject: [Python-Dev] submitted patch to linuxaudiodev
Message-ID: <15045.62175.301007.35652@gnwy100.wuh.wustl.edu>

I'm a long-time listener/first-time caller and would like to know what I
should do to have my patch examined.  I've included a description of the
patch below.

Cheers,

-chris

-----------------------------------------------------------------------------
[reference: python-Patches #412553]

Problem:

test_linuxaudiodev.py failed with a "Resource temporarily busy" message
(under the CVS version of Python).

Analysis:

The lad_write() method attempts to write continuously to /dev/dsp (or
equivalent); when the audio buffer fills, write() returns an error code and
errno is set to EAGAIN, indicating that the device buffer is full.
lad_write() interprets this as an error and, instead of trying to write
again, returns NULL.

Solution:

Use select() to check when the audio device becomes writable, and test for
EAGAIN after doing a write().  I've submitted patch #412553, which implements
this solution (use python21-lihnuxaudiodev.c-version2.diff).  With this
patch, test_linuxaudiodev.py passes.  This patch may also be relevant for
the Python 2.0.1 bugfix release.
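In outline, this is the familiar select()-then-write loop for non-blocking
devices. The sketch below is a generic Python illustration of the technique,
not the actual C patch to linuxaudiodev.c; the function name and timeout are
made up for the example:

```python
import errno
import os
import select

def write_all(fd, data, timeout=5.0):
    # Keep writing until `data` is drained; when the kernel buffer is
    # full, wait in select() instead of failing on EAGAIN.
    while data:
        _, writable, _ = select.select([], [fd], [], timeout)
        if not writable:
            raise IOError("device did not become writable in time")
        try:
            n = os.write(fd, data)
        except OSError as e:
            if e.errno == errno.EAGAIN:
                continue  # buffer refilled between select() and write()
            raise
        data = data[n:]  # drop the bytes the kernel accepted
```

The EAGAIN check after write() is still needed because the buffer can fill
again between the select() call and the write() call.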


System configuration:

Linux kernels 2.4.2 and 2.4.3 (SMP) on a dual-processor i686 with a
SoundBlaster Live! Value soundcard.




From tim.one at home.com  Thu Mar  1 00:01:34 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Feb 2001 18:01:34 -0500
Subject: [Python-Dev] Very recent test_global failure
In-Reply-To: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>

> Just fixed.

Not fixed; can no longer compile Python:

compile.c
C:\Code\python\dist\src\Python\compile.c(4184) :
    error C2065: 'DEF_BOUND' : undeclared identifier




From ping at lfw.org  Thu Mar  1 00:11:59 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Wed, 28 Feb 2001 15:11:59 -0800 (PST)
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: <Pine.LNX.4.10.10102270054110.21681-100000@localhost>
Message-ID: <Pine.LNX.4.10.10102281508520.21681-100000@localhost>

Hi again.

On Tue, 27 Feb 2001, Ka-Ping Yee wrote:
> 
> 1.  The error message for UnboundLocalError isn't really accurate.
[...]
>         UnboundLocalError: local name 'x' is not defined

I'd like to check in this change today to make it into the beta.
It's a tiny change, shouldn't break anything as i don't see how
code would rely on the wording of the message, and makes the
message more accurate.  Lib/test/test_scope.py checks for the
error but does not rely on its wording.

If i don't see objections i'll do this tonight.  I hope this is
minor enough not to be a violation of etiquette.


-- ?!ng




From tim.one at home.com  Thu Mar  1 00:13:04 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Feb 2001 18:13:04 -0500
Subject: [Python-Dev] Very recent test_global failure
In-Reply-To: <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCAENOJCAA.tim.one@home.com>

> Oops.  Missed a checkin to symtable.h.
>
> unix-users-prepare-to-recompile-everything-ly y'rs,
> Jeremy

Got that patch, everything compiles now, but test_global still fails.  Are
we, perhaps, missing an update to test_global's expected-output file too?




From tim.one at home.com  Thu Mar  1 00:21:15 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Feb 2001 18:21:15 -0500
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: <Pine.LNX.4.10.10102281508520.21681-100000@localhost>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENPJCAA.tim.one@home.com>

[Ka-Ping Yee]
> On Tue, 27 Feb 2001, Ka-Ping Yee wrote:
> >
> > 1.  The error message for UnboundLocalError isn't really accurate.
> [...]
> >         UnboundLocalError: local name 'x' is not defined
>
> I'd like to check in this change today to make it into the beta.
> It's a tiny change, shouldn't break anything as i don't see how
> code would rely on the wording of the message, and makes the
> message more accurate.  Lib/test/test_scope.py checks for the
> error but does not rely on its wording.
>
> If i don't see objections i'll do this tonight.  I hope this is
> minor enough not to be a violation of etiquette.

Sorry, but I really didn't like this change.  You had to contrive a test case
using "del" for the old

    local variable 'x' referenced before assignment

msg to appear inaccurate the way you read it.  The old msg is much more
on-target 99.999% of the time than just saying "not defined", in
non-contrived test cases.  Even in the  "del" case, it's *still* the case
that the vrbl was referenced before assignment (but after "del").

So -1, on the grounds that the new msg is worse (because less specific)
almost all the time.




From guido at digicool.com  Thu Mar  1 00:25:30 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Feb 2001 18:25:30 -0500
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: Your message of "Wed, 28 Feb 2001 15:11:59 PST."
             <Pine.LNX.4.10.10102281508520.21681-100000@localhost> 
References: <Pine.LNX.4.10.10102281508520.21681-100000@localhost> 
Message-ID: <200102282325.SAA31347@cj20424-a.reston1.va.home.com>

> On Tue, 27 Feb 2001, Ka-Ping Yee wrote:
> > 
> > 1.  The error message for UnboundLocalError isn't really accurate.
> [...]
> >         UnboundLocalError: local name 'x' is not defined
> 
> I'd like to check in this change today to make it into the beta.
> It's a tiny change, shouldn't break anything as i don't see how
> code would rely on the wording of the message, and makes the
> message more accurate.  Lib/test/test_scope.py checks for the
> error but does not rely on its wording.
> 
> If i don't see objections i'll do this tonight.  I hope this is
> minor enough not to be a violation of etiquette.

+1, but first address the comments about test_inspect.py with -O.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From nas at arctrix.com  Thu Mar  1 00:30:23 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Wed, 28 Feb 2001 15:30:23 -0800
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCENPJCAA.tim.one@home.com>; from tim.one@home.com on Wed, Feb 28, 2001 at 06:21:15PM -0500
References: <Pine.LNX.4.10.10102281508520.21681-100000@localhost> <LNBBLJKPBEHFEDALKOLCCENPJCAA.tim.one@home.com>
Message-ID: <20010228153023.A5998@glacier.fnational.com>

On Wed, Feb 28, 2001 at 06:21:15PM -0500, Tim Peters wrote:
> So -1, on the grounds that the new msg is worse (because less specific)
> almost all the time.

I too vote -1 on the proposed new message (but not -1 on changing
the current message in principle).

  Neil



From guido at digicool.com  Thu Mar  1 00:37:01 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Feb 2001 18:37:01 -0500
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: Your message of "Wed, 28 Feb 2001 18:21:15 EST."
             <LNBBLJKPBEHFEDALKOLCCENPJCAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCCENPJCAA.tim.one@home.com> 
Message-ID: <200102282337.SAA31934@cj20424-a.reston1.va.home.com>

Based on Tim's comment I change my +1 into a -1.  I had forgotten the
context.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Thu Mar  1 01:02:39 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Feb 2001 19:02:39 -0500
Subject: [Python-Dev] New fatal error in toaiff.py
Message-ID: <LNBBLJKPBEHFEDALKOLCAEOFJCAA.tim.one@home.com>

>python
Python 2.1a2 (#10, Feb 28 2001, 14:06:44) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import toaiff
Fatal Python error: unknown scope for _toaiff in ?(0) in
    c:\code\python\dist\src\lib\toaiff.py

abnormal program termination

>




From ping at lfw.org  Thu Mar  1 01:13:40 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Wed, 28 Feb 2001 16:13:40 -0800 (PST)
Subject: [Python-Dev] pydoc for CLI-less platforms
Message-ID: <Pine.LNX.4.10.10102281605370.21681-100000@localhost>

For platforms without a command-line like Windows and Mac,
pydoc will probably be used most often as a web server.
The version in CVS right now runs the server invisibly in
the background.  I just added a little GUI to control it
but i don't have an available Windows platform to test on
right now.  If you happen to have a few minutes to spare
and Windows 9x/NT/2k or a Mac, i would really appreciate
if you could give

    http://www.lfw.org/python/pydoc.py

a quick whirl.  It is intended to be invoked on Windows
platforms eventually as pydoc.pyw, so ignore the DOS box
that appears and let me know if the GUI works and behaves
sensibly for you.  When it's okay, i'll check it in.

Many thanks,


-- ?!ng


Windows and Mac compatibility changes:
    handle both <function foo at 0x827a18> and <function foo at 005D7C80>
    normalize case of paths on sys.path to get rid of duplicates
    change 'localhost' to '127.0.0.1' (Mac likes this better)
    add a tiny GUI for stopping the web server




From ping at lfw.org  Thu Mar  1 01:31:19 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Wed, 28 Feb 2001 16:31:19 -0800 (PST)
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: <200102282325.SAA31347@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10102281630330.21681-100000@localhost>

On Wed, 28 Feb 2001, Guido van Rossum wrote:
> +1, but first address the comments about test_inspect.py with -O.

Okay, will do (will fix test_inspect, won't change UnboundLocalError).


-- ?!ng




From pedroni at inf.ethz.ch  Thu Mar  1 01:57:45 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 01:57:45 +0100
Subject: [Python-Dev] nested scopes. global: have I got it right?
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <000d01c0a1ea$a1d53e60$f55821c0@newmexico>

Hi. Is the following true?

PEP227 states:
"""
If the global statement occurs within a block, all uses of the
name specified in the statement refer to the binding of that name
in the top-level namespace.
"""

but this is a bit ambiguous, because the global declaration (I imagine for
backward compatibility) does not affect the code blocks of nested (function)
definitions. So

x=7
def f():
  global x
  def g():
    exec "x=3"
    return x
  print g()

f()

prints 3, not 7.


PS: this improves backward compatibility, but either the PEP is ambiguous or
the block concept does not imply nested definitions(?). This affects only
special cases, but it is quite strange, in the presence of nested scopes, to
have declarations that do not extend to inner scopes.




From guido at digicool.com  Thu Mar  1 02:08:32 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Feb 2001 20:08:32 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: Your message of "Thu, 01 Mar 2001 01:57:45 +0100."
             <000d01c0a1ea$a1d53e60$f55821c0@newmexico> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>  
            <000d01c0a1ea$a1d53e60$f55821c0@newmexico> 
Message-ID: <200103010108.UAA00516@cj20424-a.reston1.va.home.com>

> Hi. Is the following true?
> 
> PEP227 states:
> """
> If the global statement occurs within a block, all uses of the
> name specified in the statement refer to the binding of that name
> in the top-level namespace.
> """
> 
> but this is a bit ambiguous, because the global decl (I imagine for
> backw-compatibility)
> does not affect the code blocks of nested (func) definitions. So
> 
> x=7
> def f():
>   global x
>   def g():
>     exec "x=3"
>     return x
>   print g()
> 
> f()
> 
> prints 3, not 7.

Unclear whether this should change.  The old rule can also be read as
"you have to repeat 'global' for a variable in each scope where you
intend to assign to it".
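That reading can be sketched concretely (present-day syntax; the behavior is the same): each function that assigns the variable needs its own declaration.

```python
x = 7

def f():
    global x
    x = 1             # rebinds the module-level x
    def g():
        global x      # must be repeated: f's declaration does not carry over
        x = 3
    g()

f()
print(x)  # 3
```

Dropping the second `global` would make `x = 3` create a local in g instead, leaving the module-level x at 1.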

> PS: this improve backw-compatibility but the PEP is ambiguous or
> block concept does not imply nested definitions(?). This affects
> only special cases but it is quite strange in presence of nested
> scopes, having decl that do not extend to inner scopes.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From pedroni at inf.ethz.ch  Thu Mar  1 02:24:53 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 02:24:53 +0100
Subject: [Python-Dev] nested scopes. global: have I got it right?
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>             <000d01c0a1ea$a1d53e60$f55821c0@newmexico>  <200103010108.UAA00516@cj20424-a.reston1.va.home.com>
Message-ID: <005301c0a1ee$6c30cdc0$f55821c0@newmexico>

I didn't want to start a discussion; I was more concerned with whether I got
the semantics (that I should implement) right.
So:
  x=7
  def f():
     x=1
     def g():
       global x
       def h(): return x
       return h()
     return g()

will print 1. Ok.

regards.

PS: I tried this with a2 and Python just died; I imagine this has been fixed.





From guido at digicool.com  Thu Mar  1 02:42:49 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Feb 2001 20:42:49 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: Your message of "Thu, 01 Mar 2001 02:24:53 +0100."
             <005301c0a1ee$6c30cdc0$f55821c0@newmexico> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <200103010108.UAA00516@cj20424-a.reston1.va.home.com>  
            <005301c0a1ee$6c30cdc0$f55821c0@newmexico> 
Message-ID: <200103010142.UAA00686@cj20424-a.reston1.va.home.com>

> I didn't want to start a discussion, I was more concerned if I got
> the semantic (that I should impl) right.
> So:
>   x=7
>   def f():
>      x=1
>      def g():
>        global x
>        def h(): return x
>        return h()
>      return g()

and then print f() as main, right?

> will print 1. Ok.
> 
> regards.

Argh!  I honestly don't know what this ought to do.  Under the rules
as I currently think of them this would print 1.  But that's at least
surprising, so maybe we'll have to revisit this.

Jeremy, also please note that if I add "from __future__ import
nested_scopes" to the top, this dumps core, saying: 

    lookup 'x' in g 2 -1
    Fatal Python error: com_make_closure()
    Aborted (core dumped)

Maybe you can turn this into a regular error? <0.5 wink>

> PS: I tried this with a2 and python just died, I imagine, this has
> been fixed.

Seems so. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Thu Mar  1 03:11:25 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Feb 2001 21:11:25 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEOMJCAA.tim.one@home.com>

[Samuele Pedroni]
> x=7
> def f():
>   global x
>   def g():
>     exec "x=3"
>     return x
>   print g()
>
> f()
>
> prints 3, not 7.

Note that the Ref Man (section on the global stmt) adds some more wrinkles:

    ...
    global is a directive to the parser.  It applies only to code
    parsed at the same time as the global statement.  In particular,
    a global statement contained in an exec statement does not
    affect the code block containing the exec statement, and code
    contained in an exec statement is unaffected by global statements
    in the code containing the exec statement.  The same applies to the
    eval(), execfile() and compile() functions.
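That wrinkle survives into present-day Python, where exec is a function rather than a statement; a sketch (modern syntax, so not literally what 2.1 would run):

```python
x = "outer"

def f():
    # The exec'ed string is compiled separately: its own "global x" sends the
    # string's assignment to the module namespace, but it does not turn x
    # into a global inside f's code block.
    exec("global x; x = 'set by exec'")
    x = "still local to f"   # f's own assignment binds a local, as usual
    return x

result = f()
```

Afterwards `result` is the local value while the module-level x was rebound by the exec'ed code; the two `global` contexts never see each other.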


From Jason.Tishler at dothill.com  Thu Mar  1 03:44:47 2001
From: Jason.Tishler at dothill.com (Jason Tishler)
Date: Wed, 28 Feb 2001 21:44:47 -0500
Subject: [Python-Dev] Re: Case-sensitive import
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIENHJCAA.tim.one@home.com>; from tim.one@home.com on Wed, Feb 28, 2001 at 05:21:02PM -0500
References: <20010228151728.Q449@dothill.com> <LNBBLJKPBEHFEDALKOLCIENHJCAA.tim.one@home.com>
Message-ID: <20010228214447.I252@dothill.com>

Tim,

On Wed, Feb 28, 2001 at 05:21:02PM -0500, Tim Peters wrote:
> And thank you for your Cygwin work --

You're welcome -- I appreciate the willingness of the core Python team to
consider Cygwin-related patches.

> someday I hope to use Cygwin for more
> than just running "patch" on this box <sigh> ...

Be careful!  First, you may use grep occasionally.  Next, you may find
yourself writing shell scripts.  Before you know it, you have crossed
over to the Unix side.  You have been warned! :,)

Thanks,
Jason

-- 
Jason Tishler
Director, Software Engineering       Phone: +1 (732) 264-8770 x235
Dot Hill Systems Corp.               Fax:   +1 (732) 264-8798
82 Bethany Road, Suite 7             Email: Jason.Tishler at dothill.com
Hazlet, NJ 07730 USA                 WWW:   http://www.dothill.com



From greg at cosc.canterbury.ac.nz  Thu Mar  1 03:58:06 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 01 Mar 2001 15:58:06 +1300 (NZDT)
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEOMJCAA.tim.one@home.com>
Message-ID: <200103010258.PAA02214@s454.cosc.canterbury.ac.nz>

Quoth the Samuele Pedroni:

> In particular,
> a global statement contained in an exec statement does not
> affect the code block containing the exec statement, and code
> contained in an exec statement is unaffected by global statements
> in the code containing the exec statement.

I think this is broken. As long as we're going to allow
exec-with-1-arg to implicitly mess with the current namespace,
names in the exec'ed statement should have the same meanings
as they do in the surrounding statically-compiled code.

So, global statements in the surrounding scope should be honoured
in the exec'ed statement, and global statements should be disallowed
within the exec'ed statement.

Better still, get rid of both exec-with-1-arg and locals()
altogether...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+




From fdrake at users.sourceforge.net  Thu Mar  1 06:20:23 2001
From: fdrake at users.sourceforge.net (Fred L. Drake)
Date: Wed, 28 Feb 2001 21:20:23 -0800
Subject: [Python-Dev] [development doc updates]
Message-ID: <E14YLVn-0003XL-00@usw-pr-shell2.sourceforge.net>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/





From jeremy at alum.mit.edu  Thu Mar  1 06:49:33 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 00:49:33 -0500 (EST)
Subject: [Python-Dev] code objects leakin'
Message-ID: <15005.58093.314004.571576@w221.z064000254.bwi-md.dsl.cnc.net>

It looks like code objects are leaked with surprising frequency.  I
added a simple counter that records all code object allocs and
deallocs.  For many programs, the net is zero.  For some, including
setup.py and the regression test, it's much larger than zero.

I've got no time to look at this before the beta, but perhaps someone
else does.  Even if it can't be fixed, it would be helpful to know
what's going wrong.

I am fairly certain that recursive functions are being leaked, even
after patching the function object's traverse function to visit the
func_closure.

Jeremy



From jeremy at alum.mit.edu  Thu Mar  1 07:00:25 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 01:00:25 -0500 (EST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects funcobject.c,2.35,2.36
In-Reply-To: <E14YMEZ-0006od-00@usw-pr-cvs1.sourceforge.net>
References: <E14YMEZ-0006od-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <15005.58745.306448.535530@w221.z064000254.bwi-md.dsl.cnc.net>

This change does not appear to solve the leaks, but it seems
necessary for correctness.

Jeremy



From martin at loewis.home.cs.tu-berlin.de  Thu Mar  1 07:16:59 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 1 Mar 2001 07:16:59 +0100
Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax
Message-ID: <200103010616.f216Gx301229@mira.informatik.hu-berlin.de>

> but where's the patch?

Argh. It's now at http://www.informatik.hu-berlin.de/~loewis/python/directive.diff

> other tools that parse Python will have to be adapted.

Yes, that's indeed a problem. Initially, that syntax will be used only
to denote modules that use nested scopes, so those tools would have
time to adjust.

> The __future__ hack doesn't need that.

If it is *just* parsing, then yes. If it does any further analysis
(e.g. "find definition (of a variable)" aka "find assignments to"), or
if they inspect code objects, these tools again need to be adapted.

Regards,
Martin




From thomas at xs4all.net  Thu Mar  1 08:29:09 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 1 Mar 2001 08:29:09 +0100
Subject: [Python-Dev] Re: Case-sensitive import
In-Reply-To: <20010228214447.I252@dothill.com>; from Jason.Tishler@dothill.com on Wed, Feb 28, 2001 at 09:44:47PM -0500
References: <20010228151728.Q449@dothill.com> <LNBBLJKPBEHFEDALKOLCIENHJCAA.tim.one@home.com> <20010228214447.I252@dothill.com>
Message-ID: <20010301082908.I9678@xs4all.nl>

On Wed, Feb 28, 2001 at 09:44:47PM -0500, Jason Tishler wrote:

[ Tim Peters ]
> > someday I hope to use Cygwin for more
> > than just running "patch" on this box <sigh> ...

> Be careful!  First, you may use grep occasionally.  Next, you may find
> yourself writing shell scripts.  Before you know it, you have crossed
> over to the Unix side.  You have been warned! :,)

Well, Tim used to be a true Jedi Knight, but was won over by the dark side.
His name keeps popping up in decidedly unixlike tools, like Emacs' 'python'
mode. It is certain that his defection brought balance to the force (or at
least to Python) but we'd still like to rescue him before he is forced to
sacrifice himself to save Python. ;)

Lets-just-call-him-anatim-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fredrik at pythonware.com  Thu Mar  1 12:57:08 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 1 Mar 2001 12:57:08 +0100
Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax
References: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de>  <200102282248.RAA31007@cj20424-a.reston1.va.home.com>
Message-ID: <02c901c0a246$bef128e0$0900a8c0@SPIFF>

Guido wrote:
> There's one downside to the "directive" syntax: other tools that parse
> Python will have to be adapted.  The __future__ hack doesn't need
> that.

also:

- "from __future__" gives a clear indication that you're using
  a non-standard feature.  "directive" is too generic.

- everyone knows how to mentally parse from-import
  statements, and that they may have side effects.  nobody knows
  what "directive" does.

- pragmas suck.  we need much more discussion (and calendar
  time) before adding a pragma-like directive to Python.

- "from __future__" makes me smile.  "directive" doesn't.

-1, for now.

Cheers /F




From guido at digicool.com  Thu Mar  1 15:29:10 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 09:29:10 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: Your message of "Thu, 01 Mar 2001 15:58:06 +1300."
             <200103010258.PAA02214@s454.cosc.canterbury.ac.nz> 
References: <200103010258.PAA02214@s454.cosc.canterbury.ac.nz> 
Message-ID: <200103011429.JAA03471@cj20424-a.reston1.va.home.com>

> Quoth the Samuele Pedroni:
> 
> > In particular,
> > a global statement contained in an exec statement does not
> > affect the code block containing the exec statement, and code
> > contained in an exec statement is unaffected by global statements
> > in the code containing the exec statement.
> 
> I think this is broken. As long as we're going to allow
> exec-with-1-arg to implicitly mess with the current namespace,
> names in the exec'ed statement should have the same meanings
> as they do in the surrounding statically-compiled code.
> 
> So, global statements in the surrounding scope should be honoured
> in the exec'ed statement, and global statements should be disallowed
> within the exec'ed statement.
> 
> Better still, get rid of both exec-with-1-arg and locals()
> altogether...

That's my plan, so I suppose we should not bother to "fix" the broken
behavior that has been around from the start.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Thu Mar  1 15:55:01 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 09:55:01 -0500
Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax
In-Reply-To: Your message of "Thu, 01 Mar 2001 07:16:59 +0100."
             <200103010616.f216Gx301229@mira.informatik.hu-berlin.de> 
References: <200103010616.f216Gx301229@mira.informatik.hu-berlin.de> 
Message-ID: <200103011455.JAA04064@cj20424-a.reston1.va.home.com>

> Argh. It's now at http://www.informatik.hu-berlin.de/~loewis/python/directive.diff
> 
> > other tools that parse Python will have to be adapted.
> 
> Yes, that's indeed a problem. Initially, that syntax will be used only
> to denote modules that use nested scopes, so those tools would have
> time to adjust.
> 
> > The __future__ hack doesn't need that.
> 
> If it is *just* parsing, then yes. If it does any further analysis
> (e.g. "find definition (of a variable)" aka "find assignments to"), or
> if they inspect code objects, these tools again need to be adopted.

This is just too late for the first beta.  But we'll consider it for
beta 2!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From pedroni at inf.ethz.ch  Thu Mar  1 16:33:14 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 16:33:14 +0100 (MET)
Subject: [Python-Dev] just trying to catch up with the semantic
Message-ID: <200103011533.QAA06035@core.inf.ethz.ch>

Hi.

I read the following CVS log from Jeremy:

> Fix core dump in example from Samuele Pedroni:
> 
> from __future__ import nested_scopes
> x=7
> def f():
>     x=1
>     def g():
>         global x
>         def i():
>             def h():
>                 return x
>             return h()
>         return i()
>     return g()
> 
> print f()
> print x
> 
> This kind of code didn't work correctly because x was treated as free
> in i, leading to an attempt to load x in g to make a closure for i.
> 
> Solution is to make global decl apply to nested scopes unless there is
> an assignment.  Thus, x in h is global.
> 

Will that be the intended final semantic?

The more backward-compatible semantics would be for that code to print:
1
7
(I think this was the semantics Guido was thinking of)

Now, if I have understood well, this prints
7
7

but if I put a x=666 in h this prints:
666
7

but the most natural (just IMHO) nesting semantics would be in that case to
print:
666
666
(so x is considered global despite the assignment, because the declaration
extends to enclosed scopes too).

I have no preference but I'm confused. Samuele Pedroni.




From guido at digicool.com  Thu Mar  1 16:42:55 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 10:42:55 -0500
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: Your message of "Thu, 01 Mar 2001 05:56:42 PST."
             <E14YTZS-0003kB-00@usw-pr-cvs1.sourceforge.net> 
References: <E14YTZS-0003kB-00@usw-pr-cvs1.sourceforge.net> 
Message-ID: <200103011542.KAA04518@cj20424-a.reston1.va.home.com>

Ping just checked in this:

> Log Message:
> Add __author__ and __credits__ variables.
> 
> 
> Index: tokenize.py
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Lib/tokenize.py,v
> retrieving revision 1.19
> retrieving revision 1.20
> diff -C2 -r1.19 -r1.20
> *** tokenize.py	2001/03/01 04:27:19	1.19
> --- tokenize.py	2001/03/01 13:56:40	1.20
> ***************
> *** 10,14 ****
>   it produces COMMENT tokens for comments and gives type OP for all operators."""
>   
> ! __version__ = "Ka-Ping Yee, 26 October 1997; patched, GvR 3/30/98"
>   
>   import string, re
> --- 10,15 ----
>   it produces COMMENT tokens for comments and gives type OP for all operators."""
>   
> ! __author__ = 'Ka-Ping Yee <ping at lfw.org>'
> ! __credits__ = 'first version, 26 October 1997; patched, GvR 3/30/98'
>   
>   import string, re

I'm slightly uncomfortable with the __credits__ variable inserted
here.  First of all, __credits__ doesn't really describe the
information given.  Second, doesn't this info belong in the CVS
history?  I'm not for including random extracts of a module's history
in the source code -- this is more likely than not to become out of
date.  (E.g. from the CVS log it's not clear why my contribution
deserves a mention while Tim's doesn't -- it looks like Tim probably
spent a lot more time thinking about it than I did.)

Anothor source of discomfort is that there's absolutely no standard
for this kind of meta-data variables.  We've got __version__, and I
believe we once agreed on that (in 1994 or so :-).  But __author__?
__credits__?  What next -- __cute_signoff__?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Thu Mar  1 17:10:28 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 11:10:28 -0500 (EST)
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <200103011533.QAA06035@core.inf.ethz.ch>
References: <200103011533.QAA06035@core.inf.ethz.ch>
Message-ID: <15006.29812.95600.22223@w221.z064000254.bwi-md.dsl.cnc.net>

I'm not convinced there is a natural meaning for this, nor am I
certain that what is now implemented is the least unnatural meaning.

    from __future__ import nested_scopes
    x=7
    def f():
        x=1
        def g():
            global x
            def i():
                def h():
                    return x
                return h()
            return i()
        return g()
    
    print f()
    print x

prints:
    7
    7

I think the chief question is what 'global x' means without any other
reference to x in the same code block.  The other issue is whether a
global statement is a name binding operation of sorts.

If we had
        def g():
            x = 2            # instead of global
            def i():
                def h():
                    return x
                return h()
            return i()

It is clear that x in h uses the binding introduced in g.

        def g():
            global x
            x = 2
            def i():
                def h():
                    return x
                return h()
            return i()

Now that x is declared global, should the binding for x in g be
visible in h?  I think it should, because the alternative would be
more confusing.

    def f():
        x = 3
        def g():
            global x
            x = 2
            def i():
                def h():
                    return x
                return h()
            return i()

If global x meant that the binding for x wasn't visible in nested
scopes, then h would use the binding for x introduced in f.  This is
confusing, because visual inspection shows that the nearest block with
an assignment to x is g.  (You might overlook the global x statement.)

The rule currently implemented is to use the binding introduced in the
nearest enclosing scope.  If the binding happens to be between the
name and the global namespace, that is the binding that is used.

Samuele noted that he thinks the most natural semantics would be for
global to extend into nested scopes.  I think this would be confusing
-- or at least I'm confused <wink>.  

        def g():
            global x
            x = 2
            def i():
                def h():
                    x = 10
                    return x
                return h()
            return i()

In this case, he suggests that the assignment in h should affect the
global x.  I think this is incorrect because enclosing scopes should
only have an effect when variables are free.  By the normal Python
rules, x is not free in h because there is an assignment to x; x is
just a local.
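That last rule -- an assignment makes a name local, so enclosing scopes stop mattering -- can be seen in a minimal sketch (present-day syntax; the names are illustrative):

```python
x = "global"

def g():
    x = "enclosing"
    def h():
        x = "local"      # this assignment makes x local to h,
        return x         # so neither g's nor the module's x is touched
    return h(), x

inner, enclosing = g()
```

After the call, h returned its own local, g's binding is undisturbed, and the module-level x never changed.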

Jeremy



From ping at lfw.org  Thu Mar  1 17:13:56 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 1 Mar 2001 08:13:56 -0800 (PST)
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: <200103011542.KAA04518@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10103010752120.862-100000@skuld.kingmanhall.org>

On Thu, 1 Mar 2001, Guido van Rossum wrote:
> I'm slightly uncomfortable with the __credits__ variable inserted
> here.  First of all, __credits__ doesn't really describe the
> information given.

I'll explain the motivation here.  I was going to write something
about this when i got up in the morning, but you've noticed before
i got around to it (and i haven't gone to sleep yet).

    - The __version__ variable really wasn't a useful place for
      this information.  The version of something really isn't
      the same as the author or the date it was created; it should
      be either a revision number from an RCS tag or a number
      updated periodically by the maintainer.  By separating out
      other kinds of information, we allow __version__ to retain
      its focused purpose.

    - The __author__ tag is a pretty standard piece of metadata
      among most kinds of documentation -- there are AUTHOR
      sections in almost all man pages, and similar "creator"
      information in just about every metadata standard for
      documents or work products of any kind.  Contact info and
      copyright info can go here.  This is important because it
      identifies a responsible party -- someone to ask questions
      of, and to send complaints, thanks, and patches to.  Maybe
      one day we can use it to help automate the process of
      assigning patches and directing feedback.

    - The __credits__ tag is a way of acknowledging others who
      contributed to the product.  It can be used to recount a
      little history, but the real motivation for including it
      is social engineering: i wanted to foster a stronger mutual
      gratification culture around Python by giving people a place
      to be generous with their acknowledgements.  It's always
      good to err on the side of generosity rather than stinginess
      when giving praise.  Open source is fueled in large part by
      egoboo, and if we can let everyone participate, peer-to-peer
      style rather than centralized, in patting others on the back,
      then all the better.  People do this in # comments anyway;
      the only difference now is that their notes are visible to pydoc.

> Second, doesn't this info belong in the CVS history?

__credits__ isn't supposed to be a change log; it's a reward
mechanism.  Or consider it ego-Napster, if you prefer.

Share the love. :)

> Anothor source of discomfort is that there's absolutely no standard
> for this kind of meta-data variables.

I think the behaviour of processing tools such as pydoc will
create a de-facto standard.  I was careful to respect __version__
in the ways that it is currently used, and i am humbly offering
these others in the hope that you will see why they are worth
having, too.



-- ?!ng

"If cryptography is outlawed, only QJVKN YFDLA ZBYCG HFUEG UFRYG..."




From guido at digicool.com  Thu Mar  1 17:30:53 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 11:30:53 -0500
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: Your message of "Thu, 01 Mar 2001 08:13:56 PST."
             <Pine.LNX.4.10.10103010752120.862-100000@skuld.kingmanhall.org> 
References: <Pine.LNX.4.10.10103010752120.862-100000@skuld.kingmanhall.org> 
Message-ID: <200103011630.LAA04973@cj20424-a.reston1.va.home.com>

> On Thu, 1 Mar 2001, Guido van Rossum wrote:
> > I'm slightly uncomfortable with the __credits__ variable inserted
> > here.  First of all, __credits__ doesn't really describe the
> > information given.

Ping replied:
> I'll explain the motivation here.  I was going to write something
> about this when i got up in the morning, but you've noticed before
> i got around to it (and i haven't gone to sleep yet).
> 
>     - The __version__ variable really wasn't a useful place for
>       this information.  The version of something really isn't
>       the same as the author or the date it was created; it should
>       be either a revision number from an RCS tag or a number
>       updated periodically by the maintainer.  By separating out
>       other kinds of information, we allow __version__ to retain
>       its focused purpose.

Sure.

>     - The __author__ tag is a pretty standard piece of metadata
>       among most kinds of documentation -- there are AUTHOR
>       sections in almost all man pages, and similar "creator"
>       information in just about every metadata standard for
>       documents or work products of any kind.  Contact info and
>       copyright info can go here.  This is important because it
>       identifies a responsible party -- someone to ask questions
>       of, and to send complaints, thanks, and patches to.  Maybe
>       one day we can use it to help automate the process of
>       assigning patches and directing feedback.

No problem here.

>     - The __credits__ tag is a way of acknowledging others who
>       contributed to the product.  It can be used to recount a
>       little history, but the real motivation for including it
>       is social engineering: i wanted to foster a stronger mutual
>       gratification culture around Python by giving people a place
>       to be generous with their acknowledgements.  It's always
>       good to err on the side of generosity rather than stinginess
>       when giving praise.  Open source is fueled in large part by
>       egoboo, and if we can let everyone participate, peer-to-peer
>       style rather than centralized, in patting others on the back,
>       then all the better.  People do this in # comments anyway;
>       the only difference now is that their notes are visible to pydoc.

OK.  Then I think you goofed up in the __credits__ you actually
checked in for tokenize.py:

    __credits__ = 'first version, 26 October 1997; patched, GvR 3/30/98'

I would have expected something like this:

    __credits__ = 'contributions: GvR, ESR, Tim Peters, Thomas Wouters, ' \
                  'Fred Drake, Skip Montanaro'

> > Second, doesn't this info belong in the CVS history?
> 
> __credits__ isn't supposed to be a change log; it's a reward
> mechanism.  Or consider it ego-Napster, if you prefer.
> 
> Share the love. :)

You west coasters. :-)

> > Another source of discomfort is that there's absolutely no standard
> > for this kind of meta-data variable.
> 
> I think the behaviour of processing tools such as pydoc will
> create a de-facto standard.  I was careful to respect __version__
> in the ways that it is currently used, and i am humbly offering
> these others in the hope that you will see why they are worth
> having, too.

What does pydoc do with __credits__?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Thu Mar  1 17:37:53 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 11:37:53 -0500 (EST)
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: <4.3.1.2.20010228223714.00d29430@exchange.starmedia.net>
References: <4.3.1.2.20010228223714.00d29430@exchange.starmedia.net>
Message-ID: <15006.31457.377477.65547@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "RT" == Robin Thomas <robin.thomas at starmedia.net> writes:

  RT> Using Python 2.0 on Win32. Am I the only person to be depressed
  RT> by the following behavior now that __getitem__ does the work of
  RT> __getslice__?

You may be the only person to have tried it :-).

  RT> Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32
  >>> d = {}
  >>> d[0:1] = 1
  >>> d
  {slice(0, 1, None): 1}

I think this should raise a TypeError (as you suggested later).

>>> del d[0:1]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: object doesn't support slice deletion
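The mechanics behind the surprise: with __getslice__/__setslice__ out of the
picture, d[0:1] = 1 is just d.__setitem__(slice(0, 1, None), 1), so the
TypeError has to come from the mapping itself. A sketch of a mapping that
rejects slice keys explicitly (class name hypothetical):

```python
class StrictDict(dict):
    """dict subclass that refuses slice objects as keys."""
    def __setitem__(self, key, value):
        if isinstance(key, slice):
            raise TypeError("StrictDict does not support slice assignment")
        super().__setitem__(key, value)

d = StrictDict()
d[0] = "ok"          # ordinary key assignment still works
try:
    d[0:1] = 1       # calls __setitem__(slice(0, 1, None), 1)
except TypeError as exc:
    print(exc)       # StrictDict does not support slice assignment
```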

Jeremy



From pedroni at inf.ethz.ch  Thu Mar  1 17:53:43 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 17:53:43 +0100 (MET)
Subject: [Python-Dev] just trying to catch up with the semantic
Message-ID: <200103011653.RAA09025@core.inf.ethz.ch>

Hi.

Your rationale sounds ok.
We are just facing the oddities of the Python rule - that assignment
identifies locals - when extended to the new world of nested scopes.
(Everybody will be confused in his own way ;), so better to write
non-confusing code ;))
I think I should really learn to read code this way, as should everybody
coming from languages with explicit declarations:

Is the semantics (expressed through the bytecode instrs) right?

(I)
    from __future__ import nested_scopes
    x=7
    def f():
        #pseudo-local-decl x
        x=1
        def g():
            global x # global-decl x
            def i():
                def h():
                    return x # => LOAD_GLOBAL
                return h()
            return i()
        return g()
    
    print f()
    print x

(II)
        def g():
            #pseudo-local-decl x
            x = 2            # instead of global
            def i():
                def h():
                    return x # => LOAD_DEREF (x from g)
                return h()
            return i()

(III)
        def g():
            global x # global-decl x
            x = 2 # => STORE_GLOBAL
            def i():
                def h():
                    return x # => LOAD_GLOBAL
                return h()
            return i()
(IV)           
    def f():
        # pseudo-local-decl x
        x = 3 # => STORE_FAST
        def g():
            global x # global-decl x
            x = 2 # => STORE_GLOBAL
            def i():
                def h():
                    return x # => LOAD_GLOBAL
                return h()
            return i()
(V)
        def g():
            global x # global-decl x
            x = 2 # => STORE_GLOBAL
            def i():
                def h():
                    # pseudo-local-decl x
                    x = 10   # => STORE_FAST
                    return x # => LOAD_FAST
                return h()
            return i()
If one also reads the implicit local-decl here, this is fine; otherwise it
is confusing. It's a matter of whether 'global' kills the local-decl only in
one scope or in the nested ones too. I have no preference.
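These predictions can be checked mechanically in a current Python (where
nested scopes are the default) by pulling the inner code object apart with
the dis module. This reproduces the (II)/(III) distinction: the global
declaration in g makes the read in h compile to a global load rather than
LOAD_DEREF.

```python
import dis

x = 7

def f():
    x = 1                # local to f
    def g():
        global x         # global-decl in g ...
        def h():
            return x     # ... so this read is a global load
        return h()
    return g()

def nested_code(parent, name):
    # Fetch a nested function's code object out of co_consts.
    return next(c for c in parent.co_consts
                if hasattr(c, "co_name") and c.co_name == name)

h_code = nested_code(nested_code(f.__code__, "g"), "h")
ops = [i.opname for i in dis.get_instructions(h_code) if i.argval == "x"]
print(ops)   # e.g. ['LOAD_GLOBAL']
print(f())   # 7 -- h sees the module-level x, not f's local
```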


regards, Samuele Pedroni.




From jeremy at alum.mit.edu  Thu Mar  1 17:57:20 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 11:57:20 -0500 (EST)
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <200103011653.RAA09025@core.inf.ethz.ch>
References: <200103011653.RAA09025@core.inf.ethz.ch>
Message-ID: <15006.32624.826559.907667@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "SP" == Samuele Pedroni <pedroni at inf.ethz.ch> writes:

  SP> If one reads also here the implicit local-decl, this is fine,
  SP> otherwise this is confusing. It's a matter whether 'global'
  SP> kills the local-decl only in one scope or in the nesting too. I
  SP> have no preference.

All your examples look like what is currently implemented.  My
preference is that global kills the local-decl only in one scope.
I'll stick with that unless Guido disagrees.

Jeremy



From pedroni at inf.ethz.ch  Thu Mar  1 18:04:56 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 18:04:56 +0100 (MET)
Subject: [Python-Dev] just trying to catch up with the semantic
Message-ID: <200103011704.SAA09425@core.inf.ethz.ch>

[Jeremy] 
> All your examples look like what is currently implemented.  My
> preference is that global kills the local-decl only in one scope.
> I'll stick with that unless Guido disagrees.
At least this will break less code.

regards.




From ping at lfw.org  Thu Mar  1 18:11:28 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 1 Mar 2001 09:11:28 -0800 (PST)
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: <200103011630.LAA04973@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10103010909520.862-100000@skuld.kingmanhall.org>

On Thu, 1 Mar 2001, Guido van Rossum wrote:
> OK.  Then I think you goofed up in the __credits__ you actually
> checked in for tokenize.py:
> 
>     __credits__ = 'first version, 26 October 1997; patched, GvR 3/30/98'

Indeed, that was mindless copying.

> I would have expected something like this:
> 
>     __credits__ = 'contributions: GvR, ESR, Tim Peters, Thomas Wouters, ' \
>                   'Fred Drake, Skip Montanaro'

Sure.  Done.

> You west coasters. :-)

You forget that i'm a Canadian prairie boy at heart. :)

> What does pydoc do with __credits__?

It shows up in a little section at the end of the document.


-- ?!ng

"If cryptography is outlawed, only QJVKN YFDLA ZBYCG HFUEG UFRYG..."




From esr at thyrsus.com  Thu Mar  1 18:47:51 2001
From: esr at thyrsus.com (Eric S. Raymond)
Date: Thu, 1 Mar 2001 12:47:51 -0500
Subject: [Python-Dev] Finger error -- my apologies
Message-ID: <20010301124751.B24835@thyrsus.com>

I meant to accept this patch, but I think I rejected it instead.
Sorry, Ping.  Resubmit, please, if I fooed up?
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

It is the assumption of this book that a work of art is a gift, not a
commodity.  Or, to state the modern case with more precision, that works of
art exist simultaneously in two "economies," a market economy and a gift
economy.  Only one of these is essential, however: a work of art can survive
without the market, but where there is no gift there is no art.
	-- Lewis Hyde, The Gift: Imagination and the Erotic Life of Property
-------------- next part --------------
An embedded message was scrubbed...
From: nobody <nobody at sourceforge.net>
Subject: [ python-Patches-405122 ] webbrowser fix
Date: Thu, 01 Mar 2001 06:03:54 -0800
Size: 2012
URL: <http://mail.python.org/pipermail/python-dev/attachments/20010301/e4473d2d/attachment.eml>

From jeremy at alum.mit.edu  Thu Mar  1 19:16:03 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 13:16:03 -0500 (EST)
Subject: [Python-Dev] __future__ and the interactive interpreter prompt
Message-ID: <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net>

from __future__ import nested_scopes is accepted at the interactive
interpreter prompt but has no effect beyond the line on which it was
entered.  You could use it with lambdas entered following a
semicolon, I guess.

I would rather see the future statement take effect for the remainder
of the interactive interpreter session.  I have included a first-cut
patch below that makes this possible, using an object called
PySessionState.  (I don't like the name, but don't have a better one;
PyCompilerFlags?)

The idea of the session state is to record information about the state
of an interactive session that may affect compilation.  The
state object is created in PyRun_InteractiveLoop() and passed all the
way through to PyNode_Compile().

Does this seem a reasonable approach?  Should I include it in the
beta?  Any name suggestions?
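For the record, the parenthetical guess is the name that stuck: CPython ended
up with a PyCompilerFlags struct and ...Flags variants of the run functions
for exactly this. From the Python side the persistence mechanism is visible
on code objects -- compiling with a __future__ feature's flag sets a bit in
co_flags, which an interactive loop can feed back into its next compile()
call (the codeop module does this for the REPL). A sketch, assuming the
'annotations' feature is still optional in the running interpreter:

```python
import __future__

# Pick a feature that is still optional in this interpreter
# (assumption: 'annotations' is available as a __future__ feature).
feature = __future__.annotations

# Compile one "session line" with the feature's flag; the flag is
# recorded on the resulting code object, ready to be reused for the
# next line of the session.
code = compile("a = 1", "<session>", "exec",
               flags=feature.compiler_flag, dont_inherit=True)
print(bool(code.co_flags & feature.compiler_flag))
```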

Jeremy


Index: Include/compile.h
===================================================================
RCS file: /cvsroot/python/python/dist/src/Include/compile.h,v
retrieving revision 2.27
diff -c -r2.27 compile.h
*** Include/compile.h	2001/02/28 01:58:08	2.27
--- Include/compile.h	2001/03/01 18:18:27
***************
*** 41,47 ****
  
  /* Public interface */
  struct _node; /* Declare the existence of this type */
! DL_IMPORT(PyCodeObject *) PyNode_Compile(struct _node *, char *);
  DL_IMPORT(PyCodeObject *) PyCode_New(
  	int, int, int, int, PyObject *, PyObject *, PyObject *, PyObject *,
  	PyObject *, PyObject *, PyObject *, PyObject *, int, PyObject *); 
--- 41,48 ----
  
  /* Public interface */
  struct _node; /* Declare the existence of this type */
! DL_IMPORT(PyCodeObject *) PyNode_Compile(struct _node *, char *,
! 					 PySessionState *);
  DL_IMPORT(PyCodeObject *) PyCode_New(
  	int, int, int, int, PyObject *, PyObject *, PyObject *, PyObject *,
  	PyObject *, PyObject *, PyObject *, PyObject *, int, PyObject *); 
Index: Include/pythonrun.h
===================================================================
RCS file: /cvsroot/python/python/dist/src/Include/pythonrun.h,v
retrieving revision 2.38
diff -c -r2.38 pythonrun.h
*** Include/pythonrun.h	2001/02/02 18:19:15	2.38
--- Include/pythonrun.h	2001/03/01 18:18:27
***************
*** 7,12 ****
--- 7,16 ----
  extern "C" {
  #endif
  
+ typedef struct {
+ 	int ss_nested_scopes;
+ } PySessionState;
+ 
  DL_IMPORT(void) Py_SetProgramName(char *);
  DL_IMPORT(char *) Py_GetProgramName(void);
  
***************
*** 25,31 ****
  DL_IMPORT(int) PyRun_SimpleString(char *);
  DL_IMPORT(int) PyRun_SimpleFile(FILE *, char *);
  DL_IMPORT(int) PyRun_SimpleFileEx(FILE *, char *, int);
! DL_IMPORT(int) PyRun_InteractiveOne(FILE *, char *);
  DL_IMPORT(int) PyRun_InteractiveLoop(FILE *, char *);
  
  DL_IMPORT(struct _node *) PyParser_SimpleParseString(char *, int);
--- 29,35 ----
  DL_IMPORT(int) PyRun_SimpleString(char *);
  DL_IMPORT(int) PyRun_SimpleFile(FILE *, char *);
  DL_IMPORT(int) PyRun_SimpleFileEx(FILE *, char *, int);
! DL_IMPORT(int) PyRun_InteractiveOne(FILE *, char *, PySessionState *);
  DL_IMPORT(int) PyRun_InteractiveLoop(FILE *, char *);
  
  DL_IMPORT(struct _node *) PyParser_SimpleParseString(char *, int);
Index: Python/compile.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/compile.c,v
retrieving revision 2.184
diff -c -r2.184 compile.c
*** Python/compile.c	2001/03/01 06:09:34	2.184
--- Python/compile.c	2001/03/01 18:18:28
***************
*** 471,477 ****
  static void com_assign(struct compiling *, node *, int, node *);
  static void com_assign_name(struct compiling *, node *, int);
  static PyCodeObject *icompile(node *, struct compiling *);
! static PyCodeObject *jcompile(node *, char *, struct compiling *);
  static PyObject *parsestrplus(node *);
  static PyObject *parsestr(char *);
  static node *get_rawdocstring(node *);
--- 471,478 ----
  static void com_assign(struct compiling *, node *, int, node *);
  static void com_assign_name(struct compiling *, node *, int);
  static PyCodeObject *icompile(node *, struct compiling *);
! static PyCodeObject *jcompile(node *, char *, struct compiling *,
! 			      PySessionState *);
  static PyObject *parsestrplus(node *);
  static PyObject *parsestr(char *);
  static node *get_rawdocstring(node *);
***************
*** 3814,3822 ****
  }
  
  PyCodeObject *
! PyNode_Compile(node *n, char *filename)
  {
! 	return jcompile(n, filename, NULL);
  }
  
  struct symtable *
--- 3815,3823 ----
  }
  
  PyCodeObject *
! PyNode_Compile(node *n, char *filename, PySessionState *sess)
  {
! 	return jcompile(n, filename, NULL, sess);
  }
  
  struct symtable *
***************
*** 3844,3854 ****
  static PyCodeObject *
  icompile(node *n, struct compiling *base)
  {
! 	return jcompile(n, base->c_filename, base);
  }
  
  static PyCodeObject *
! jcompile(node *n, char *filename, struct compiling *base)
  {
  	struct compiling sc;
  	PyCodeObject *co;
--- 3845,3856 ----
  static PyCodeObject *
  icompile(node *n, struct compiling *base)
  {
! 	return jcompile(n, base->c_filename, base, NULL);
  }
  
  static PyCodeObject *
! jcompile(node *n, char *filename, struct compiling *base,
! 	 PySessionState *sess)
  {
  	struct compiling sc;
  	PyCodeObject *co;
***************
*** 3864,3870 ****
  	} else {
  		sc.c_private = NULL;
  		sc.c_future = PyNode_Future(n, filename);
! 		if (sc.c_future == NULL || symtable_build(&sc, n) < 0) {
  			com_free(&sc);
  			return NULL;
  		}
--- 3866,3882 ----
  	} else {
  		sc.c_private = NULL;
  		sc.c_future = PyNode_Future(n, filename);
! 		if (sc.c_future == NULL) {
! 			com_free(&sc);
! 			return NULL;
! 		}
! 		if (sess) {
! 			if (sess->ss_nested_scopes)
! 				sc.c_future->ff_nested_scopes = 1;
! 			else if (sc.c_future->ff_nested_scopes)
! 				sess->ss_nested_scopes = 1;
! 		}
! 		if (symtable_build(&sc, n) < 0) {
  			com_free(&sc);
  			return NULL;
  		}
Index: Python/import.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/import.c,v
retrieving revision 2.169
diff -c -r2.169 import.c
*** Python/import.c	2001/03/01 08:47:29	2.169
--- Python/import.c	2001/03/01 18:18:28
***************
*** 608,614 ****
  	n = PyParser_SimpleParseFile(fp, pathname, Py_file_input);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, pathname);
  	PyNode_Free(n);
  
  	return co;
--- 608,614 ----
  	n = PyParser_SimpleParseFile(fp, pathname, Py_file_input);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, pathname, NULL);
  	PyNode_Free(n);
  
  	return co;
Index: Python/pythonrun.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/pythonrun.c,v
retrieving revision 2.125
diff -c -r2.125 pythonrun.c
*** Python/pythonrun.c	2001/02/28 20:58:04	2.125
--- Python/pythonrun.c	2001/03/01 18:18:28
***************
*** 37,45 ****
  static void initmain(void);
  static void initsite(void);
  static PyObject *run_err_node(node *n, char *filename,
! 			      PyObject *globals, PyObject *locals);
  static PyObject *run_node(node *n, char *filename,
! 			  PyObject *globals, PyObject *locals);
  static PyObject *run_pyc_file(FILE *fp, char *filename,
  			      PyObject *globals, PyObject *locals);
  static void err_input(perrdetail *);
--- 37,47 ----
  static void initmain(void);
  static void initsite(void);
  static PyObject *run_err_node(node *n, char *filename,
! 			      PyObject *globals, PyObject *locals,
! 			      PySessionState *sess);
  static PyObject *run_node(node *n, char *filename,
! 			  PyObject *globals, PyObject *locals,
! 			  PySessionState *sess);
  static PyObject *run_pyc_file(FILE *fp, char *filename,
  			      PyObject *globals, PyObject *locals);
  static void err_input(perrdetail *);
***************
*** 56,62 ****
  extern void _PyCodecRegistry_Init(void);
  extern void _PyCodecRegistry_Fini(void);
  
- 
  int Py_DebugFlag; /* Needed by parser.c */
  int Py_VerboseFlag; /* Needed by import.c */
  int Py_InteractiveFlag; /* Needed by Py_FdIsInteractive() below */
--- 58,63 ----
***************
*** 472,477 ****
--- 473,481 ----
  {
  	PyObject *v;
  	int ret;
+ 	PySessionState sess;
+ 
+ 	sess.ss_nested_scopes = 0;
  	v = PySys_GetObject("ps1");
  	if (v == NULL) {
  		PySys_SetObject("ps1", v = PyString_FromString(">>> "));
***************
*** 483,489 ****
  		Py_XDECREF(v);
  	}
  	for (;;) {
! 		ret = PyRun_InteractiveOne(fp, filename);
  #ifdef Py_REF_DEBUG
  		fprintf(stderr, "[%ld refs]\n", _Py_RefTotal);
  #endif
--- 487,493 ----
  		Py_XDECREF(v);
  	}
  	for (;;) {
! 		ret = PyRun_InteractiveOne(fp, filename, &sess);
  #ifdef Py_REF_DEBUG
  		fprintf(stderr, "[%ld refs]\n", _Py_RefTotal);
  #endif
***************
*** 497,503 ****
  }
  
  int
! PyRun_InteractiveOne(FILE *fp, char *filename)
  {
  	PyObject *m, *d, *v, *w;
  	node *n;
--- 501,507 ----
  }
  
  int
! PyRun_InteractiveOne(FILE *fp, char *filename, PySessionState *sess)
  {
  	PyObject *m, *d, *v, *w;
  	node *n;
***************
*** 537,543 ****
  	if (m == NULL)
  		return -1;
  	d = PyModule_GetDict(m);
! 	v = run_node(n, filename, d, d);
  	if (v == NULL) {
  		PyErr_Print();
  		return -1;
--- 541,547 ----
  	if (m == NULL)
  		return -1;
  	d = PyModule_GetDict(m);
! 	v = run_node(n, filename, d, d, sess);
  	if (v == NULL) {
  		PyErr_Print();
  		return -1;
***************
*** 907,913 ****
  PyRun_String(char *str, int start, PyObject *globals, PyObject *locals)
  {
  	return run_err_node(PyParser_SimpleParseString(str, start),
! 			    "<string>", globals, locals);
  }
  
  PyObject *
--- 911,917 ----
  PyRun_String(char *str, int start, PyObject *globals, PyObject *locals)
  {
  	return run_err_node(PyParser_SimpleParseString(str, start),
! 			    "<string>", globals, locals, NULL);
  }
  
  PyObject *
***************
*** 924,946 ****
  	node *n = PyParser_SimpleParseFile(fp, filename, start);
  	if (closeit)
  		fclose(fp);
! 	return run_err_node(n, filename, globals, locals);
  }
  
  static PyObject *
! run_err_node(node *n, char *filename, PyObject *globals, PyObject *locals)
  {
  	if (n == NULL)
  		return  NULL;
! 	return run_node(n, filename, globals, locals);
  }
  
  static PyObject *
! run_node(node *n, char *filename, PyObject *globals, PyObject *locals)
  {
  	PyCodeObject *co;
  	PyObject *v;
! 	co = PyNode_Compile(n, filename);
  	PyNode_Free(n);
  	if (co == NULL)
  		return NULL;
--- 928,957 ----
  	node *n = PyParser_SimpleParseFile(fp, filename, start);
  	if (closeit)
  		fclose(fp);
! 	return run_err_node(n, filename, globals, locals, NULL);
  }
  
  static PyObject *
! run_err_node(node *n, char *filename, PyObject *globals, PyObject *locals,
! 	     PySessionState *sess)
  {
  	if (n == NULL)
  		return  NULL;
! 	return run_node(n, filename, globals, locals, sess);
  }
  
  static PyObject *
! run_node(node *n, char *filename, PyObject *globals, PyObject *locals,
! 	 PySessionState *sess)
  {
  	PyCodeObject *co;
  	PyObject *v;
! 	if (sess) {
! 		fprintf(stderr, "session state: %d\n",
! 			sess->ss_nested_scopes);
! 	}
! 	/* XXX pass sess->ss_nested_scopes to PyNode_Compile */
! 	co = PyNode_Compile(n, filename, sess);
  	PyNode_Free(n);
  	if (co == NULL)
  		return NULL;
***************
*** 986,992 ****
  	n = PyParser_SimpleParseString(str, start);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, filename);
  	PyNode_Free(n);
  	return (PyObject *)co;
  }
--- 997,1003 ----
  	n = PyParser_SimpleParseString(str, start);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, filename, NULL);
  	PyNode_Free(n);
  	return (PyObject *)co;
  }



From guido at digicool.com  Thu Mar  1 19:34:53 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 13:34:53 -0500
Subject: [Python-Dev] __future__ and the interactive interpreter prompt
In-Reply-To: Your message of "Thu, 01 Mar 2001 13:16:03 EST."
             <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103011834.NAA16957@cj20424-a.reston1.va.home.com>

> from __future__ import nested_scopes is accepted at the interactive
> interpreter prompt but has no effect beyond the line on which it was
> entered.  You could use it with lambdas entered following a
> semicolon, I guess.
> 
> I would rather see the future statement take effect for the remainder
> of the interactive interpreter session.  I have included a first-cut
> patch below that makes this possible, using an object called
> PySessionState.  (I don't like the name, but don't have a better one;
> PyCompilerFlags?)
> 
> The idea of the session state is to record information about the state
> of an interactive session that may affect compilation.  The
> state object is created in PyRun_InteractiveLoop() and passed all the
> way through to PyNode_Compile().
> 
> Does this seem a reasonable approach?  Should I include it in the
> beta?  Any name suggestions?

I'm not keen on changing the prototypes for PyNode_Compile() and
PyRun_InteractiveOne().  I suspect that folks doing funky stuff might
be calling these directly.

Would it be a great pain to add ...Ex() versions that take a session
state, and have the old versions call this with a made-up dummy
session state?
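The suggested shape, translated out of C into a small Python sketch (all
names hypothetical): the extended entry point takes the extra state, and the
old one keeps its signature by delegating with a dummy default.

```python
class SessionState:
    """Hypothetical stand-in for the proposed PySessionState."""
    def __init__(self, nested_scopes=False):
        self.nested_scopes = nested_scopes

def run_interactive_one_ex(line, state):
    # Extended version: compilation may consult the session state.
    return "compiled %r (nested_scopes=%s)" % (line, state.nested_scopes)

def run_interactive_one(line):
    # Old signature preserved; delegates with a made-up dummy state,
    # so existing callers keep working unchanged.
    return run_interactive_one_ex(line, SessionState())

print(run_interactive_one("x = 1"))
```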

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Thu Mar  1 19:40:58 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 13:40:58 -0500
Subject: [Python-Dev] Finger error -- my apologies
In-Reply-To: Your message of "Thu, 01 Mar 2001 12:47:51 EST."
             <20010301124751.B24835@thyrsus.com> 
References: <20010301124751.B24835@thyrsus.com> 
Message-ID: <200103011840.NAA17088@cj20424-a.reston1.va.home.com>

> I meant to accept this patch, but I think I rejected it instead.
> Sorry, Ping.  Resubmit, please, if I fooed up?

There's no need to resubmit -- you should be able to reset the state
any time.  I've changed it back to None so you can try again.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From esr at thyrsus.com  Thu Mar  1 19:58:57 2001
From: esr at thyrsus.com (Eric S. Raymond)
Date: Thu, 1 Mar 2001 13:58:57 -0500
Subject: [Python-Dev] Finger error -- my apologies
In-Reply-To: <200103011840.NAA17088@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 01, 2001 at 01:40:58PM -0500
References: <20010301124751.B24835@thyrsus.com> <200103011840.NAA17088@cj20424-a.reston1.va.home.com>
Message-ID: <20010301135857.D25553@thyrsus.com>

Guido van Rossum <guido at digicool.com>:
> > I meant to accept this patch, but I think I rejected it instead.
> > Sorry, Ping.  Resubmit, please, if I fooed up?
> 
> There's no need to resubmit -- you should be able to reset the state
> any time.  I've changed it back to None so you can try again.

Done.

I also discovered that I wasn't quite the idiot I thought I had been; I
actually tripped over an odd little misfeature of Mozilla that other 
people working the patch queue should know about.

I saw "Rejected" after I thought I had clicked "Accepted" and thought
I had made both a mouse error and a thinko...

What actually happened was I clicked "Accepted" and then tried to page down
my browser.  Unfortunately the choice field was still selected -- and
guess what the last status value in the pulldown menu is, and
what the PgDn key does! :-)

Others should beware of this...
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Our society won't be truly free until "None of the Above" is always an option.



From tim.one at home.com  Thu Mar  1 20:11:14 2001
From: tim.one at home.com (Tim Peters)
Date: Thu, 1 Mar 2001 14:11:14 -0500
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: <Pine.LNX.4.10.10103010909520.862-100000@skuld.kingmanhall.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBGJDAA.tim.one@home.com>

OTOH, seeing my name in a __credits__ blurb does nothing for my ego; it makes
me involuntarily shudder at having yet another potential source of extremely
urgent personal email from strangers who can't read <0.9 wink>.

So the question is, should __credits__nanny.py look for its file of names to
rip out via a magically named file or via cmdline argument?

or-maybe-a-gui!-ly y'rs  - tim




From Greg.Wilson at baltimore.com  Thu Mar  1 20:21:13 2001
From: Greg.Wilson at baltimore.com (Greg Wilson)
Date: Thu, 1 Mar 2001 14:21:13 -0500 
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
Message-ID: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com>

I'm working on Solaris, and have configured Python using
--with-cxx=g++.  I have a library "libenf.a", which depends
on several .so's (Eric Young's libeay and a couple of others).
I can't modify the library, but I'd like to wrap it so that
our QA group can write scripts to test it.

My C module was pretty simple to put together.  However, when
I load it, Python (or someone) complains that the symbols that
I know are in "libeay.so" are missing.  It's on LD_LIBRARY_PATH,
and "nm" shows that the symbols really are there.  So:

1. Do I have to do something special to allow Python to load
   .so's that extensions depend on?  If so, what?

2. Or do I have to load the .so myself prior to loading my
   extension?  If so, how?  Explicit "dlopen()" calls at the
   top of "init" don't work (presumably because the built-in
   loading has already decided that some symbols are missing).
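One answer that later became standard lore (hedged: the library name is the
one from the question, and the knob is Unix-only): ask the interpreter to
dlopen extensions with RTLD_GLOBAL before importing, so symbols from
dependencies like libeay.so become visible to subsequently loaded objects.
In today's Python the switch is sys.setdlopenflags():

```python
import os
import sys

# On Unix builds, make subsequently imported extension modules export
# their (and their dependencies') symbols globally.  Guarded so the
# sketch also runs where dlopen flags do not exist (e.g. Windows).
if hasattr(sys, "setdlopenflags"):
    sys.setdlopenflags(os.RTLD_NOW | os.RTLD_GLOBAL)
    print("dlopen flags:", sys.getdlopenflags())

# import enf  # hypothetical wrapper extension for libenf.a
```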

Instead of offering a beer for the first correct answer this
time, I promise to write it up and send it to Fred Drake for
inclusion in the 2.1 release notes :-).

Thanks
Greg



From guido at digicool.com  Thu Mar  1 21:32:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 15:32:37 -0500
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: Your message of "Thu, 01 Mar 2001 11:37:53 EST."
             <15006.31457.377477.65547@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <4.3.1.2.20010228223714.00d29430@exchange.starmedia.net>  
            <15006.31457.377477.65547@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103012032.PAA18322@cj20424-a.reston1.va.home.com>

> >>>>> "RT" == Robin Thomas <robin.thomas at starmedia.net> writes:
> 
>   RT> Using Python 2.0 on Win32. Am I the only person to be depressed
>   RT> by the following behavior now that __getitem__ does the work of
>   RT> __getslice__?

Jeremy:
> You may be the only person to have tried it :-).
> 
>   RT> Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32
>   >>> d = {}
>   >>> d[0:1] = 1
>   >>> d
>   {slice(0, 1, None): 1}
> 
> I think this should raise a TypeError (as you suggested later).

Me too, but it's such an unusual corner-case that I can't worry about
it too much.  The problem has to do with being backwards compatible --
we couldn't add the 3rd argument to the slice API that we wanted.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Thu Mar  1 21:58:24 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 15:58:24 -0500 (EST)
Subject: [Python-Dev] __future__ and the interactive interpreter prompt
In-Reply-To: <200103011834.NAA16957@cj20424-a.reston1.va.home.com>
References: <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103011834.NAA16957@cj20424-a.reston1.va.home.com>
Message-ID: <15006.47088.256265.467786@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

  GvR> I'm not keen on changing the prototypes for PyNode_Compile()
  GvR> and PyRun_InteractiveOne().  I suspect that folks doing funky
  GvR> stuff might be calling these directly.

  GvR> Would it be a great pain to add ...Ex() versions that take a
  GvR> session state, and have the old versions call this with a
  GvR> made-up dummy session state?

Doesn't seem like a big problem.  Any other issues with the approach?

Jeremy



From guido at digicool.com  Thu Mar  1 21:46:56 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 15:46:56 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: Your message of "Thu, 01 Mar 2001 17:53:43 +0100."
             <200103011653.RAA09025@core.inf.ethz.ch> 
References: <200103011653.RAA09025@core.inf.ethz.ch> 
Message-ID: <200103012046.PAA18395@cj20424-a.reston1.va.home.com>

> is the semantic (expressed through bytecode instrs) right?

Hi Samuele,

Thanks for bringing this up.  I agree with your predictions for these
examples, and have checked them in as part of the test_scope.py test
suite.  Fortunately Jeremy's code passes the test!

The rule is really pretty simple if you look at it through the right
glasses:

    To resolve a name, search from the inside out for either a scope
    that contains a global statement for that name, or a scope that
    contains a definition for that name (or both).

Thus, on the one hand the effect of a global statement is restricted
to the current scope, excluding nested scopes:

   def f():
       global x
       def g():
           x = 1 # new local

On the other hand, a name mentioned in a global statement hides outer
definitions of the same name, and thus has an effect on nested scopes:

    def f():
       x = 1
       def g():
           global x
           def h():
               return x # global

We shouldn't code like this, but it's good to agree on what it should
mean when encountered!
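Both cases run as predicted; a self-contained check (in current Python,
where nested scopes are the default):

```python
x = "global"

def f1():
    global x
    def g():
        x = "local"   # new local; f1's global statement does not reach here
        return x
    return g()

def f2():
    x = "outer"
    def g():
        global x      # hides f2's x from the nested scope
        def h():
            return x  # resolves to the module-level x
        return h()
    return g()

print(f1())  # local
print(f2())  # global
```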

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Thu Mar  1 22:05:51 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 16:05:51 -0500 (EST)
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
	<LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
	<15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
	<000d01c0a1ea$a1d53e60$f55821c0@newmexico>
Message-ID: <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>

> x=7
> def f():
>   global x
>   def g():
>     exec "x=3"
>     return x
>   print g()
> 
> f()
> 
> prints 3, not 7.

I've been meaning to reply to your original post on this subject,
which actually addresses two different issues -- global and exec.  The
example above will fail with a SyntaxError in the nested_scopes
future, because of exec in the presence of a free variable.  The error
message is bad, because it says that exec is illegal in g because g
contains nested scopes.  I may not get to fix that before the beta.

The reasoning about the error here is, as usual with exec, that name
binding is a static or compile-time property of the program text.  The
use of hyper-dynamic features like import * and exec are not allowed
when they may interfere with static resolution of names.
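As a sketch of the same static-binding principle in today's Python 3 (where exec is a function and this case is no longer a SyntaxError, but an assignment inside exec still cannot rebind a statically resolved local):

```python
def g():
    x = 7
    exec("x = 3")   # writes into a snapshot of locals(), not the real local
    return x        # statically resolved to g's local slot for x

print(g())  # -> 7
```

Name binding stays a compile-time property of the program text; exec simply lost the power to interfere with it.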

Buy that?

Jeremy



From guido at digicool.com  Thu Mar  1 22:01:52 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 16:01:52 -0500
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: Your message of "Thu, 01 Mar 2001 15:54:55 EST."
             <15006.46879.607192.367739@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <15006.46879.607192.367739@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103012101.QAA18516@cj20424-a.reston1.va.home.com>

(Adding python-dev, keeping python-list)

> Quoth Robin Thomas <robin.thomas at starmedia.net>:
> | Using Python 2.0 on Win32. Am I the only person to be depressed by the 
> | following behavior now that __getitem__ does the work of __getslice__?
> |
> | Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32
> |  >>> d = {}
> |  >>> d[0:1] = 1
> |  >>> d
> | {slice(0, 1, None): 1}
> |
> | And then, for more depression:
> |
> |  >>> d[0:1] = 2
> |  >>> d
> | {slice(0, 1, None): 1, slice(0, 1, None): 2}
> |
> | And then, for extra extra chagrin:
> |
> |  >>> print d[0:1]
> | Traceback (innermost last):
> |    File "<pyshell#11>", line 1, in ?
> |      d[0:1]
> | KeyError: slice(0, 1, None)
> 
> If it helps, you ruined my day.

Mine too. :-)

> | So, questions:
> |
> | 1) Is this behavior considered a bug by the BDFL or the community at large?

I can't speak for the community, but it smells like a bug to me.

> | If so, has a fix been conceived? Am I re-opening a long-resolved issue?

No, and no.

> | 2) If we're still open to proposed solutions, which of the following do you 
> | like:
> |
> |     a) make slices hash and cmp as their 3-tuple (start,stop,step),
> |        so that if I accidentally set a slice object as a key,
> |        I can at least re-set it or get it or del it :)

Good idea.  The SF patch manager is always open.

> |     b) have dict.__setitem__ expressly reject objects of SliceType
> |        as keys, raising your favorite in (TypeError, ValueError)

This is *also* a good idea.
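Idea (a) can be sketched in pure Python with a hypothetical wrapper class (the name and layout here are invented for illustration) that hashes and compares a slice as its (start, stop, step) 3-tuple:

```python
class HashableSlice:
    """Hypothetical wrapper: hash/compare a slice as (start, stop, step)."""
    def __init__(self, s):
        self.slice = s
    def _key(self):
        return (self.slice.start, self.slice.stop, self.slice.step)
    def __hash__(self):
        return hash(self._key())
    def __eq__(self, other):
        return isinstance(other, HashableSlice) and self._key() == other._key()

d = {}
d[HashableSlice(slice(0, 1))] = 1
d[HashableSlice(slice(0, 1))] = 2   # re-sets the same key instead of duplicating
assert len(d) == 1 and d[HashableSlice(slice(0, 1))] == 2
```

Much later, slice objects did become hashable in Python itself along essentially these lines.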

> From: Donn Cave <donn at oz.net>
> 
> I think we might be able to do better.  I hacked in a quick fix
> in ceval.c that looks to me like it has the desired effect without
> closing the door to intentional slice keys (however unlikely.)
[...]
> *** Python/ceval.c.dist Thu Feb  1 14:48:12 2001
> --- Python/ceval.c      Wed Feb 28 21:52:55 2001
> ***************
> *** 3168,3173 ****
> --- 3168,3178 ----
>         /* u[v:w] = x */
>   {
>         int ilow = 0, ihigh = INT_MAX;
> +       if (u->ob_type->tp_as_mapping) {
> +               PyErr_SetString(PyExc_TypeError,
> +                       "dict object doesn't support slice assignment");
> +               return -1;
> +       }
>         if (!_PyEval_SliceIndex(v, &ilow))
>                 return -1;
>         if (!_PyEval_SliceIndex(w, &ihigh))

Alas, this isn't right.  It defeats the purpose completely: the whole
point was that you should be able to write a sequence class that
supports extended slices.  This uses __getitem__ and __setitem__, but
class instances have a nonzero tp_as_mapping pointer too!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From barry at digicool.com  Thu Mar  1 22:11:32 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 1 Mar 2001 16:11:32 -0500
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
References: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com>
Message-ID: <15006.47876.237152.882774@anthem.wooz.org>

>>>>> "GW" == Greg Wilson <Greg.Wilson at baltimore.com> writes:

    GW> I'm working on Solaris, and have configured Python using
    GW> --with-cxx=g++.  I have a library "libenf.a", which depends on
    GW> several .so's (Eric Young's libeay and a couple of others).  I
    GW> can't modify the library, but I'd like to wrap it so that our
    GW> QA group can write scripts to test it.

    GW> My C module was pretty simple to put together.  However, when
    GW> I load it, Python (or someone) complains that the symbols that
    GW> I know are in "libeay.so" are missing.  It's on
    GW> LD_LIBRARY_PATH, and "nm" shows that the symbols really are
    GW> there.  So:

    | 1. Do I have to do something special to allow Python to load
    |    .so's that extensions depend on?  If so, what?

Greg, it's been a while since I've worked on Solaris, but here's what
I remember.  This is all circa Solaris 2.5/2.6.

LD_LIBRARY_PATH only helps the linker find dynamic libraries at
compile/link time.  It's equivalent to the compiler's -L option.  It
does /not/ help the dynamic linker (ld.so) find your libraries at
run-time.  For that, you need LD_RUN_PATH or the -R option.  I'm of
the opinion that if you are specifying -L to the compiler, you should
always also specify -R, and that using -L/-R is always better than
LD_LIBRARY_PATH/LD_RUN_PATH (because the former is done by the person
doing the install and the latter is a burden imposed on all your
users).

There's an easy way to tell if your .so's are going to give you
problems.  Run `ldd mymodule.so' and see what the linker shows for the
dependencies.  If ldd can't find a dependency, it'll tell you,
otherwise, it'll show you the path to the dependent .so files.  If ldd
has a problem, you'll have a problem when you try to import it.

IIRC, distutils had a problem in this regard a while back, but these
days it seems to Just Work for me on Linux.  However, Linux is
slightly different in that there's a file /etc/ld.so.conf that you can
use to specify additional directories for ld.so to search at run-time,
so it can be fixed "after the fact".

    GW> Instead of offering a beer for the first correct answer this
    GW> time, I promise to write it up and send it to Fred Drake for
    GW> inclusion in the 2.1 release notes :-).

Oh no you don't!  You don't get off that easily.  See you next
week. :)

-Barry



From barry at digicool.com  Thu Mar  1 22:21:37 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 1 Mar 2001 16:21:37 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
References: <200103011653.RAA09025@core.inf.ethz.ch>
	<200103012046.PAA18395@cj20424-a.reston1.va.home.com>
Message-ID: <15006.48481.807174.69908@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    GvR>     To resolve a name, search from the inside out for either
    GvR> a scope that contains a global statement for that name, or a
    GvR> scope that contains a definition for that name (or both).

I think that's an excellent rule Guido -- hopefully it's captured
somewhere in the docs. :)  I think it yields behavior that is both easily
discovered by visual code inspection and easily understood.

-Barry



From greg at cosc.canterbury.ac.nz  Thu Mar  1 22:54:45 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 02 Mar 2001 10:54:45 +1300 (NZDT)
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <15006.32624.826559.907667@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103012154.KAA02307@s454.cosc.canterbury.ac.nz>

Jeremy:

> My preference is that global kills the local-decl only in one scope.

I agree, because otherwise there would be no way of
*undoing* the effect of a global in an outer scope.

The way things are, I can write a function

  def f():
    x = 3
    return x

and be assured that x will always be local, no matter what
environment I move the function into. I like this property.
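A quick sketch of that property (the surrounding bindings are illustrative):

```python
x = 99  # an outer x that f never sees...

def outer():
    x = 42  # ...no matter what scope f is moved into
    def f():
        x = 3        # the assignment makes x local to f, full stop
        return x
    return f()

print(outer())  # -> 3
```

Because f assigns x, x is local to f regardless of the enclosing environment.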

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From thomas at xs4all.net  Thu Mar  1 23:04:22 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 1 Mar 2001 23:04:22 +0100
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
In-Reply-To: <15006.47876.237152.882774@anthem.wooz.org>; from barry@digicool.com on Thu, Mar 01, 2001 at 04:11:32PM -0500
References: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com> <15006.47876.237152.882774@anthem.wooz.org>
Message-ID: <20010301230422.M9678@xs4all.nl>

On Thu, Mar 01, 2001 at 04:11:32PM -0500, Barry A. Warsaw wrote:

>     | 1. Do I have to do something special to allow Python to load
>     |    .so's that extensions depend on?  If so, what?

> Greg, it's been a while since I've worked on Solaris, but here's what
> I remember.  This is all circa Solaris 2.5/2.6.

It worked the same way in SunOS 4.x, I believe.

> I'm of the opinion that if you are specifying -L to the compiler, you
> should always also specify -R, and that using -L/-R is always better than
> LD_LIBRARY_PATH/LD_RUN_PATH (because the former is done by the person
> doing the install and the latter is a burden imposed on all your users).

FWIW, I concur with the entire story. In my experience it's pretty
SunOS/Solaris specific (in fact, I long wondered why one of my C books spent
so much time explaining -R/-L, even though it wasn't necessary on my
platforms of choice at that time ;) but it might also apply to other
Solaris-inspired shared-library environments (HP-UX ? AIX ? IRIX ?)

> IIRC, distutils had a problem in this regard a while back, but these
> days it seems to Just Work for me on Linux.  However, Linux is
> slightly different in that there's a file /etc/ld.so.conf that you can
> use to specify additional directories for ld.so to search at run-time,
> so it can be fixed "after the fact".

BSDI uses the same /etc/ld.so.conf mechanism. However, LD_LIBRARY_PATH does
get used on linux, BSDI and IIRC FreeBSD as well, but as a runtime
environment variable. The /etc/ld.so.conf file gets compiled into a cache of
available libraries using 'ldconfig'. On FreeBSD, there is no
'/etc/ld.so.conf' file; instead, you use 'ldconfig -m <path>' to add <path>
to the current cache, and add or modify the definition of
${ldconfig_path} in /etc/rc.conf (which is used in the bootup procedure to
create a new cache, in case the old one was f'd up).

I imagine OpenBSD and NetBSD are based off of FreeBSD, not BSDI. (BSDI was
late in adopting ELF, and obviously based most of it on Linux, for some
reason.)

I-wonder-how-it-works-on-Windows-ly y'rs,

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From barry at digicool.com  Thu Mar  1 23:12:27 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 1 Mar 2001 17:12:27 -0500
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
References: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com>
	<15006.47876.237152.882774@anthem.wooz.org>
	<20010301230422.M9678@xs4all.nl>
Message-ID: <15006.51531.427250.884726@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    >> Greg, it's been a while since I've worked on Solaris, but
    >> here's what I remember.  This is all circa Solaris 2.5/2.6.

    TW> It worked the same way in SunOS 4.x, I believe.

Ah, yes, I remember SunOS 4.x.  Remember SunOS 3.5 and earlier?  Or
even the Sun 1's?  :) NIST/NBS had at least one of those boxes still
rattling around when I left.  IIRC, it ran our old news server for
years.

good-old-days-ly y'rs,
-Barry



From thomas at xs4all.net  Thu Mar  1 23:21:07 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 1 Mar 2001 23:21:07 +0100
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: <200103012101.QAA18516@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 01, 2001 at 04:01:52PM -0500
References: <15006.46879.607192.367739@w221.z064000254.bwi-md.dsl.cnc.net> <200103012101.QAA18516@cj20424-a.reston1.va.home.com>
Message-ID: <20010301232107.O9678@xs4all.nl>

On Thu, Mar 01, 2001 at 04:01:52PM -0500, Guido van Rossum wrote:
> > Quoth Robin Thomas <robin.thomas at starmedia.net>:

[ Dicts accept slice objects as keys in assignment, but not in retrieval ]

> > | 1) Is this behavior considered a bug by the BDFL or the community at large?

> I can't speak for the community, but it smells like a bug to me.

Speaking for the person who implemented the slice-fallback to sliceobjects:
yes, it's a bug, because it's an unintended consequence of the change :) The
intention was to eradicate the silly discrepancy between indexing, normal
slices and extended slices: normal indexing works through __getitem__,
sq_item and mp_subscript. Normal (two argument) slices work through
__getslice__ and sq_slice. Extended slices work through __getitem__, sq_item
and mp_subscript again.

Note, however, that though *this* particular form of the bug is new in
Python 2.0, the underlying bug wasn't absent in 1.5.2 either!

Python 1.5.2 (#0, Feb 20 2001, 23:57:58)  [GCC 2.95.3 20010125 (prerelease)]
on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> d = {}
>>> d[0:1] = "spam"
Traceback (innermost last):
  File "<stdin>", line 1, in ?
TypeError: object doesn't support slice assignment
>>> d[0:1:1] = "spam"
>>> d[0:1:] = "spam"
>>> d
{slice(0, 1, None): 'spam', slice(0, 1, 1): 'spam'}

The bug is just extended to cover normal slices as well, because the absence
of sq_slice now causes Python to fall back to normal item setting/retrieval.

I think making slices hashable objects makes the most sense. They can just
be treated as a three-tuple of the values in the slice, or some such.
Falling back to just sq_item/__getitem__ and not mp_subscript might make
some sense, but it seems a bit of an artificial split, since classes that
pretend to be mappings would be treated differently than types that pretend
to be mappings.
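In today's Python the eradication is complete: __getslice__ and sq_slice are gone, and every slice, normal or extended, reaches __getitem__ as a slice object. A minimal sketch (class name invented for illustration):

```python
class Seq:
    def __init__(self, data):
        self.data = list(data)
    def __getitem__(self, key):
        if isinstance(key, slice):    # both s[1:4] and s[::2] land here
            return self.data[key.start:key.stop:key.step]
        return self.data[key]         # plain indexing: s[3]

s = Seq(range(10))
assert s[3] == 3
assert s[1:4] == [1, 2, 3]
assert s[::2] == [0, 2, 4, 6, 8]
```

One hook handles indexing, normal slices, and extended slices alike.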

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim.one at home.com  Thu Mar  1 23:37:35 2001
From: tim.one at home.com (Tim Peters)
Date: Thu, 1 Mar 2001 17:37:35 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <15006.48481.807174.69908@anthem.wooz.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCMECKJDAA.tim.one@home.com>

> >>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:
>
>     GvR>     To resolve a name, search from the inside out for either
>     GvR> a scope that contains a global statement for that name, or a
>     GvR> scope that contains a definition for that name (or both).
>
[Barry A. Warsaw]
> I think that's an excellent rule Guido --

Hmm.  After an hour of consideration, I would agree, provided only that the
rule also say you *stop* upon finding the first one <wink>.

> hopefully it's captured somewhere in the docs. :)

The python-dev archives are incorporated into the docs by implicit reference.

you-found-it-you-fix-it-ly y'rs  - tim




From martin at loewis.home.cs.tu-berlin.de  Thu Mar  1 23:39:01 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 1 Mar 2001 23:39:01 +0100
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
Message-ID: <200103012239.f21Md1i01641@mira.informatik.hu-berlin.de>

> I have a library "libenf.a", which depends on several .so's (Eric
> Young's libeay and a couple of others).

> My C module was pretty simple to put together.  However, when I load
> it, Python (or someone) complains that the symbols that I know are
> in "libeay.so" are missing.

If it says that the symbols are missing, it is *not* a problem of
LD_LIBRARY_PATH, LD_RUN_PATH (I can't find documentation or any mention
of that variable anywhere...), or the -R option.

Instead, the most likely cause is that you forgot to link the .so when
linking the extension module. I.e. you should do

gcc -o foomodule.so foomodule.o -lenf -leay

If you omit the -leay, you get a shared object which will report
missing symbols when being loaded, except when the shared library was
loaded already for some other reason.

If you *did* specify -leay, it still might be that the symbols are not
available in the shared library. You said that nm displayed them, but
will nm still display them after you applied strip(1) to the library?
To see the symbols found by ld.so.1, you need to use the -D option of
nm(1).

Regards,
Martin



From jeremy at alum.mit.edu  Fri Mar  2 00:34:44 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 18:34:44 -0500 (EST)
Subject: [Python-Dev] nested scopes and future status
Message-ID: <15006.56468.16421.206413@w221.z064000254.bwi-md.dsl.cnc.net>

There are several loose ends in the nested scopes changes that I won't
have time to fix before the beta.  Here's a laundry list of tasks that
remain.  I don't think any of these is crucial for the release.
Holler if there's something you'd like me to fix tonight.

- Integrate the parsing of future statements into the _symtable
  module's interface.  This interface is new with 2.1 and
  undocumented, so its deficiency here will not affect any code.

- Update traceback.py to understand SyntaxErrors that have a text
  attribute and an offset of None.  It should not print the caret.

- PyErr_ProgramText() should be called when an exception is printed
  rather than when it is raised.

- Fix pdb to support nested scopes.

- Produce a better error message/warning for code like this:
  def f(x):
      def g():
          exec ...
          print x
  The warning message should say that exec is not allowed in a nested
  function with free variables.  It currently says that g *contains* a
  nested function with free variables.

- Update the documentation.

Jeremy



From pedroni at inf.ethz.ch  Fri Mar  2 00:22:20 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Fri, 2 Mar 2001 00:22:20 +0100
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com><15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net><000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <004101c0a2a6$781cd440$f979fea9@newmexico>

Hi.


> > x=7
> > def f():
> >   global x
> >   def g():
> >     exec "x=3"
> >     return x
> >   print g()
> > 
> > f()
> > 
> > prints 3, not 7.
> 
> I've been meaning to reply to your original post on this subject,
> which actually addresses two different issues -- global and exec.  The
> example above will fail with a SyntaxError in the nested_scopes
> future, because of exec in the presence of a free variable.  The error
> message is bad, because it says that exec is illegal in g because g
> contains nested scopes.  I may not get to fix that before the beta.
> 
> The reasoning about the error here is, as usual with exec, that name
> binding is a static or compile-time property of the program text.  The
> use of hyper-dynamic features like import * and exec are not allowed
> when they may interfere with static resolution of names.
> 
> Buy that?
Yes, I buy that.  (I had tried it with the old a2.)
So will this code also raise an error, or am I misunderstanding the point
and the error happens because of the global decl?

# top-level
def g():
  exec "x=3"
  return x

That's fine by me, but it kills many naive uses of exec.  I'm wondering if it
does not make more sense to directly take the next big step and issue
an error (under future nested_scopes) for *all* uses of exec without 'in',
because every use of a builtin will cause the error...

regards




From jeremy at alum.mit.edu  Fri Mar  2 00:22:28 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 18:22:28 -0500 (EST)
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: <004101c0a2a6$781cd440$f979fea9@newmexico>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
	<LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
	<15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
	<000d01c0a1ea$a1d53e60$f55821c0@newmexico>
	<15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
	<004101c0a2a6$781cd440$f979fea9@newmexico>
Message-ID: <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "SP" == Samuele Pedroni <pedroni at inf.ethz.ch> writes:

  SP> # top-level
  SP> def g():
  SP>   exec "x=3" 
  SP>   return x

At the top level, there is no closure created, because the enclosing scope
is not a function scope.  I believe that's the right thing to do,
except that the exec "x=3" also assigns to the global.

I'm not sure if there is a strong justification for allowing this
form, except that it is the version of exec that is most likely to
occur in legacy code.

Jeremy



From guido at digicool.com  Fri Mar  2 03:17:38 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 21:17:38 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: Your message of "Thu, 01 Mar 2001 17:37:35 EST."
             <LNBBLJKPBEHFEDALKOLCMECKJDAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCMECKJDAA.tim.one@home.com> 
Message-ID: <200103020217.VAA19891@cj20424-a.reston1.va.home.com>

> > >>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:
> >
> >     GvR>     To resolve a name, search from the inside out for either
> >     GvR> a scope that contains a global statement for that name, or a
> >     GvR> scope that contains a definition for that name (or both).
> >
> [Barry A. Warsaw]
> > I think that's an excellent rule Guido --
> 
> Hmm.  After an hour of consideration,

That's quick -- it took me longer than that to come to the conclusion
that Jeremy had actually done the right thing. :-)

> I would agree, provided only that the
> rule also say you *stop* upon finding the first one <wink>.
> 
> > hopefully it's captured somewhere in the docs. :)
> 
> The python-dev archives are incorporated into the docs by implicit reference.
> 
> you-found-it-you-fix-it-ly y'rs  - tim

I'm sure the docs can stand some updates after the 2.1b1 crunch is
over to document what all we did.  After the conference!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Fri Mar  2 03:35:01 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 21:35:01 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 18:22:28 EST."
             <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico>  
            <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103020235.VAA22273@cj20424-a.reston1.va.home.com>

> >>>>> "SP" == Samuele Pedroni <pedroni at inf.ethz.ch> writes:
> 
>   SP> # top-level
>   SP> def g():
>   SP>   exec "x=3" 
>   SP>   return x
> 
> At the top level, there is no closure created, because the enclosing scope
> is not a function scope.  I believe that's the right thing to do,
> except that the exec "x=3" also assigns to the global.
> 
> I'm not sure if there is a strong justification for allowing this
> form, except that it is the version of exec that is most likely to
> occur in legacy code.

Unfortunately this used to work, using a gross hack: when an exec (or
import *) was present inside a function, the namespace semantics *for
that function* was changed to the pre-0.9.1 semantics, where all names
are looked up *at run time* first in the locals then in the globals
and then in the builtins.

I don't know how common this is -- it's pretty fragile.  If there's a
great clamor, we can put this behavior back after b1 is released.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Fri Mar  2 03:43:34 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 21:43:34 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 21:35:01 EST."
             <200103020235.VAA22273@cj20424-a.reston1.va.home.com> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico> <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>  
            <200103020235.VAA22273@cj20424-a.reston1.va.home.com> 
Message-ID: <200103020243.VAA24384@cj20424-a.reston1.va.home.com>

> >   SP> # top-level
> >   SP> def g():
> >   SP>   exec "x=3" 
> >   SP>   return x

[me]
> Unfortunately this used to work, using a gross hack: when an exec (or
> import *) was present inside a function, the namespace semantics *for
> that function* was changed to the pre-0.9.1 semantics, where all names
> are looked up *at run time* first in the locals then in the globals
> and then in the builtins.
> 
> I don't know how common this is -- it's pretty fragile.  If there's a
> great clamor, we can put this behavior back after b1 is released.

I spoke too soon.  It just works in the latest 2.1b1.  Or am I missing
something?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From ping at lfw.org  Fri Mar  2 03:50:41 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 1 Mar 2001 18:50:41 -0800 (PST)
Subject: [Python-Dev] Re: Is outlawing-nested-import-* only an implementation issue?
In-Reply-To: <14998.33979.566557.956297@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <Pine.LNX.4.10.10102241727410.13155-100000@localhost>

On Fri, 23 Feb 2001, Jeremy Hylton wrote:
> I think the meaning of print x should be statically determined.  That
> is, the programmer should be able to determine the binding environment
> in which x will be resolved (for print x) by inspection of the code.

I haven't had time in a while to follow up on this thread, but i just
wanted to say that i think this is a reasonable and sane course of
action.  I see the flaws in the model i was advocating, and i'm sorry
for consuming all that time in the discussion.


-- ?!ng


Post Scriptum:

On Fri, 23 Feb 2001, Jeremy Hylton wrote:
>   KPY> I tried STk Scheme, guile, and elisp, and they all do this.
> 
> I guess I'm just dense then.  Can you show me an example?

The example is pretty much exactly what you wrote:

    (define (f)
        (eval '(define y 2))
        y)

It produced 2.

But several sources have confirmed that this is just bad implementation
behaviour, so i'm willing to consider that a red herring.  Believe it
or not, in some Schemes, the following actually happens!

            STk> (define x 1)
            x
            STk> (define (func flag)
                     (if flag (define x 2))
                     (lambda () (set! x 3)))
            func
            STk> ((func #t))
            STk> x
            1
            STk> ((func #f))
            STk> x
            3

More than one professor that i showed the above to screamed.





From jeremy at alum.mit.edu  Fri Mar  2 02:12:37 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 20:12:37 -0500 (EST)
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: <200103020243.VAA24384@cj20424-a.reston1.va.home.com>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
	<LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
	<15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
	<000d01c0a1ea$a1d53e60$f55821c0@newmexico>
	<15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
	<004101c0a2a6$781cd440$f979fea9@newmexico>
	<15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103020235.VAA22273@cj20424-a.reston1.va.home.com>
	<200103020243.VAA24384@cj20424-a.reston1.va.home.com>
Message-ID: <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

  >> >   SP> # top-level
  >> >   SP> def g():
  >> >   SP>   exec "x=3"
  >> >   SP>   return x

  GvR> [me]
  >> Unfortunately this used to work, using a gross hack: when an exec
  >> (or import *) was present inside a function, the namespace
  >> semantics *for that function* was changed to the pre-0.9.1
  >> semantics, where all names are looked up *at run time* first in
  >> the locals then in the globals and then in the builtins.
  >>
  >> I don't know how common this is -- it's pretty fragile.  If
  >> there's a great clamor, we can put this behavior back after b1 is
  >> released.

  GvR> I spoke too soon.  It just works in the latest 2.1b1.  Or am I
  GvR> missing something?

The nested scopes rules don't kick in until you've got one function
nested in another.  The top-level namespace is treated differently
than other function namespaces.  If a function is defined at the
top-level then all its free variables are globals.  As a result, the
old rules still apply.

Since class scopes are ignored for nesting, methods defined in
top-level classes are handled the same way.

I'm not completely sure this makes sense, although it limits code
breakage; most functions are defined at the top-level or in classes!
I think it is fairly clear, though.

Jeremy



From guido at digicool.com  Fri Mar  2 04:04:19 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 22:04:19 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 20:12:37 EST."
             <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico> <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net> <200103020235.VAA22273@cj20424-a.reston1.va.home.com> <200103020243.VAA24384@cj20424-a.reston1.va.home.com>  
            <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103020304.WAA24620@cj20424-a.reston1.va.home.com>

>   >> >   SP> # top-level
>   >> >   SP> def g():
>   >> >   SP>   exec "x=3"
>   >> >   SP>   return x
> 
>   GvR> [me]
>   >> Unfortunately this used to work, using a gross hack: when an exec
>   >> (or import *) was present inside a function, the namespace
>   >> semantics *for that function* was changed to the pre-0.9.1
>   >> semantics, where all names are looked up *at run time* first in
>   >> the locals then in the globals and then in the builtins.
>   >>
>   >> I don't know how common this is -- it's pretty fragile.  If
>   >> there's a great clamor, we can put this behavior back after b1 is
>   >> released.
> 
>   GvR> I spoke too soon.  It just works in the latest 2.1b1.  Or am I
>   GvR> missing something?
> 
> The nested scopes rules don't kick in until you've got one function
> nested in another.  The top-level namespace is treated differently
> than other function namespaces.  If a function is defined at the
> top-level then all its free variables are globals.  As a result, the
> old rules still apply.

This doesn't make sense.  If the free variables were truly considered
globals, the reference to x would raise a NameError, because the exec
doesn't define it at the global level -- it defines it at the local
level.  So apparently you are generating LOAD_NAME instead of
LOAD_GLOBAL for free variables in toplevel functions.  Oh well, this
does the job!
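(The distinction can be seen directly with the `dis` module; a minimal sketch, run under a modern CPython, where a free variable in a top-level function still compiles to a global lookup:)

```python
import dis

def f():
    return x  # 'x' is free in f; at top level it compiles to a global lookup

# Collect the opcodes that reference 'x'.
ops = [ins.opname for ins in dis.get_instructions(f) if ins.argval == "x"]
print(ops)
```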

> Since class scopes are ignored for nesting, methods defined in
> top-level classes are handled the same way.
> 
> I'm not completely sure this makes sense, although it limits code
> breakage; most functions are defined at the top-level or in classes!
> I think it is fairly clear, though.

Yeah, it's pretty unlikely that there will be much code breakage of
this form:

def f():
    def g():
        exec "x = 1"
        return x

(Hm, trying this I see that it generates a warning, but with the wrong
filename.  I'll see if I can use symtable_warn() here.)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Fri Mar  2 02:31:28 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 20:31:28 -0500 (EST)
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: <200103020304.WAA24620@cj20424-a.reston1.va.home.com>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
	<LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
	<15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
	<000d01c0a1ea$a1d53e60$f55821c0@newmexico>
	<15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
	<004101c0a2a6$781cd440$f979fea9@newmexico>
	<15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103020235.VAA22273@cj20424-a.reston1.va.home.com>
	<200103020243.VAA24384@cj20424-a.reston1.va.home.com>
	<15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103020304.WAA24620@cj20424-a.reston1.va.home.com>
Message-ID: <15006.63472.628208.875808@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

  >> The nested scopes rules don't kick in until you've got one
  >> function nested in another.  The top-level namespace is treated
  >> differently than other function namespaces.  If a function is
  >> defined at the top-level then all its free variables are globals.
  >> As a result, the old rules still apply.

  GvR> This doesn't make sense.  If the free variables were truly
  GvR> considered globals, the reference to x would raise a NameError,
  GvR> because the exec doesn't define it at the global level -- it
  GvR> defines it at the local level.  So apparently you are
  GvR> generating LOAD_NAME instead of LOAD_GLOBAL for free variables
  GvR> in toplevel functions.  Oh well, this does the job!

Actually, I only generate LOAD_NAME for unoptimized, top-level
function namespaces.  These are exactly the old rules and I avoided
changing them for top-level functions, except when they contained a
nested function.

If we eliminate exec without "in," this is yet another problem that
goes away.

Jeremy



From guido at digicool.com  Fri Mar  2 05:07:16 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 23:07:16 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 20:31:28 EST."
             <15006.63472.628208.875808@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico> <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net> <200103020235.VAA22273@cj20424-a.reston1.va.home.com> <200103020243.VAA24384@cj20424-a.reston1.va.home.com> <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net> <200103020304.WAA24620@cj20424-a.reston1.va.home.com>  
            <15006.63472.628208.875808@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103020407.XAA30061@cj20424-a.reston1.va.home.com>

[Jeremy]
>   >> The nested scopes rules don't kick in until you've got one
>   >> function nested in another.  The top-level namespace is treated
>   >> differently than other function namespaces.  If a function is
>   >> defined at the top-level then all its free variables are globals.
>   >> As a result, the old rules still apply.
> 
>   GvR> This doesn't make sense.  If the free variables were truly
>   GvR> considered globals, the reference to x would raise a NameError,
>   GvR> because the exec doesn't define it at the global level -- it
>   GvR> defines it at the local level.  So apparently you are
>   GvR> generating LOAD_NAME instead of LOAD_GLOBAL for free variables
>   GvR> in toplevel functions.  Oh well, this does the job!

[Jeremy]
> Actually, I only generate LOAD_NAME for unoptimized, top-level
> function namespaces.  These are exactly the old rules and I avoided
> changing them for top-level functions, except when they contained a
> nested function.

Aha.

> If we eliminate exec without "in," this is yet another problem that
> goes away.

But that's for another release...  That will probably get a lot of
resistance from some category of users!

So it's fine for now.  Thanks, Jeremy!  Great job!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fredrik at effbot.org  Fri Mar  2 09:35:59 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Fri, 2 Mar 2001 09:35:59 +0100
Subject: [Python-Dev] a small C style question
Message-ID: <05f101c0a2f3$cf4bae10$e46940d5@hagrid>

DEC's OpenVMS compiler is a bit pickier than most other compilers.
among other things, it correctly notices that the "code" variable in
this statement is an unsigned variable:

    UNICODEDATA:

        if (code < 0 || code >= 65536)
    ........^
    %CC-I-QUESTCOMPARE, In this statement, the unsigned 
    expression "code" is being compared with a relational
    operator to a constant whose value is not greater than
    zero.  This might not be what you intended.
    at line number 285 in file UNICODEDATA.C

the easiest solution would of course be to remove the "code < 0"
part, but code is a Py_UCS4 variable.  what if someone some day
changes Py_UCS4 to a 64-bit signed integer, for example?

what's the preferred style?

1) leave it as is, and let OpenVMS folks live with the
compiler complaint

2) get rid of "code < 0" and hope that nobody messes
up the Py_UCS4 declaration

3) cast "code" to a known unsigned type, e.g:

        if ((unsigned int) code >= 65536)

Cheers /F




From mwh21 at cam.ac.uk  Fri Mar  2 13:58:49 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: Fri, 2 Mar 2001 12:58:49 +0000 (GMT)
Subject: [Python-Dev] python-dev summary, 2001-02-15 - 2001-03-01
Message-ID: <Pine.LNX.4.10.10103021255240.18596-100000@localhost.localdomain>

Thanks for all the positive feedback for the last summary!

 This is a summary of traffic on the python-dev mailing list between
 Feb 15 and Feb 28 (inclusive) 2001.  It is intended to inform the
 wider Python community of ongoing developments.  To comment, just
 post to python-list at python.org or comp.lang.python in the usual
 way. Give your posting a meaningful subject line, and if it's about a
 PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep
 iteration) All python-dev members are interested in seeing ideas
 discussed by the community, so don't hesitate to take a stance on a
 PEP if you have an opinion.

 This is the second python-dev summary written by Michael Hudson.
 Previous summaries were written by Andrew Kuchling and can be found
 at:

   <http://www.amk.ca/python/dev/>

 New summaries will appear at:

  <http://starship.python.net/crew/mwh/summaries/>

 and will continue to be archived at Andrew's site.

   Posting distribution (with apologies to mbm)

   Number of articles in summary: 400

       |                         ]|[                            
       |                         ]|[                            
    60 |                         ]|[                            
       |                         ]|[                            
       |                         ]|[                            
       |                         ]|[                     ]|[    
       |                         ]|[     ]|[             ]|[    
       |                         ]|[     ]|[             ]|[    
    40 |                         ]|[     ]|[             ]|[ ]|[
       |                         ]|[     ]|[             ]|[ ]|[
       |     ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
    20 | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[             ]|[ ]|[ ]|[ ]|[         ]|[ ]|[ ]|[
       | ]|[ ]|[             ]|[ ]|[ ]|[ ]|[         ]|[ ]|[ ]|[
       | ]|[ ]|[             ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[
       | ]|[ ]|[     ]|[     ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[
       | ]|[ ]|[     ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[
     0 +-033-037-002-008-006-021-071-037-051-012-002-021-054-045
        Thu 15| Sat 17| Mon 19| Wed 21| Fri 23| Sun 25| Tue 27|
            Fri 16  Sun 18  Tue 20  Thu 22  Sat 24  Mon 26  Wed 28

 A slightly quieter week on python-dev.  As you can see, most Python
 developers are too well-adjusted to post much on weekends.  Or
 Mondays.

 There was a lot of traffic on the bugs, patches and checkins lists in
 preparation for the upcoming 2.1b1 release.


    * backwards incompatibility *

 Most of the posts in the large spike in the middle of the posting
 distribution were on the subject of backward compatibility.  One of
 the unexpected (by those of us who hadn't thought too hard about it)
 consequences of nested scopes was that some code using the dreaded
 "from-module-import-*" code inside functions became ambiguous, and
 the plan was to ban such code in Python 2.1.  This provoked a storm
 of protest from many quarters, including python-dev and
 comp.lang.python.  If you really want to read all of this, start
 here:

  <http://mail.python.org/pipermail/python-dev/2001-February/013003.html>

 However, as you will know if you read comp.lang.python, PythonLabs
 took note, and in:

  <http://mail.python.org/pipermail/python-dev/2001-February/013125.html>
 
 Guido announced that the new nested scopes behaviour would be opt-in
 in 2.1, but code that will break in python 2.2 when nested scopes
 become the default will produce a warning in 2.1.  To get the new
 behaviour in a module, one will need to put

    from __future__ import nested_scopes

 at the top of the module.  It is possible this gimmick will be used
 to introduce further backwards-incompatible features in the future.
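 (The new rules can be illustrated with a closure; a sketch in modern syntax.  The `__future__` import shown is the opt-in spelling from 2.1, and remains accepted, though it is a no-op once nested scopes are the default:)

```python
from __future__ import nested_scopes  # opt in; a no-op on later Pythons

def make_adder(n):
    def add(x):
        # Under nested scopes 'add' can read 'n' from the enclosing
        # function's namespace; under the old rules this lookup would fail.
        return x + n
    return add

print(make_adder(3)(4))  # → 7
```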


    * obmalloc *

 After some more discussion, including Neil Schemenauer pointing out
 that obmalloc might enable him to make the cycle GC faster, obmalloc
 was finally checked in.

 There's a second patch from Vladimir Marangozov implementing a memory
 profiler.  (sorry for the long line)

  <http://sourceforge.net/tracker/index.php?func=detail&aid=401229&group_id=5470&atid=305470>

 Opinion was muted about this; as Neil summed up in:

  <http://mail.python.org/pipermail/python-dev/2001-February/013205.html>

 no one cares enough to put the time into it and review this patch.
 Sufficiently violently wielded opinions may swing the day...


    * pydoc *

 Ka-Ping Yee checked in his amazing pydoc.  pydoc was described in

  <http://mail.python.org/pipermail/python-dev/2001-January/011538.html>

 It gives command line and web browser access to Python's
 documentation, and will be installed as a separate script in 2.1.


    * other stuff *

 It is believed that the case-sensitive import issues mentioned in the
 last summary have been sorted out, although it will be hard to be
 sure until the beta.

 The unit-test discussion petered out.  Nothing has been checked in
 yet.

 The iterators discussion seems to have disappeared.  At least, your
 author can't find it!

Cheers,
M.




From guido at digicool.com  Fri Mar  2 15:22:27 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 09:22:27 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
Message-ID: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>

I was tickled when I found a quote from Tim Berners-Lee about Python
here: http://www.w3.org/2000/10/swap/#L88

Most quotable part: "Python is a language you can get into on one
battery!"

We should be able to use that for PR somewhere...

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mwh21 at cam.ac.uk  Fri Mar  2 15:32:01 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 02 Mar 2001 14:32:01 +0000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: "A.M. Kuchling"'s message of "Wed, 28 Feb 2001 12:55:12 -0800"
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk>

"A.M. Kuchling" <akuchling at users.sourceforge.net> writes:

> --- NEW FILE: pydoc ---
> #!/usr/bin/env python
> 

Could I make a request that this gets munged to point to the python
that's being installed at build time?  I've just built from CVS,
installed in /usr/local, and:

$ pydoc -g
Traceback (most recent call last):
  File "/usr/local/bin/pydoc", line 3, in ?
    import pydoc
ImportError: No module named pydoc

because the /usr/bin/env python thing hits the older python in /usr
first.

Don't bother if this is actually difficult.

Cheers,
M.




From guido at digicool.com  Fri Mar  2 15:34:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 09:34:37 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: Your message of "02 Mar 2001 14:32:01 GMT."
             <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> 
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net>  
            <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <200103021434.JAA06630@cj20424-a.reston1.va.home.com>

> "A.M. Kuchling" <akuchling at users.sourceforge.net> writes:
> 
> > --- NEW FILE: pydoc ---
> > #!/usr/bin/env python
> > 
> 
> Could I make a request that this gets munged to point to the python
> that's being installed at build time?  I've just built from CVS,
> installed in /usr/local, and:
> 
> $ pydoc -g
> Traceback (most recent call last):
>   File "/usr/local/bin/pydoc", line 3, in ?
>     import pydoc
> ImportError: No module named pydoc
> 
> because the /usr/bin/env python thing hits the older python in /usr
> first.
> 
> Don't bother if this is actually difficult.

This could become a standard distutils feature!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From akuchlin at mems-exchange.org  Fri Mar  2 15:56:17 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 2 Mar 2001 09:56:17 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <200103021434.JAA06630@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 02, 2001 at 09:34:37AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com>
Message-ID: <20010302095617.A11182@ute.cnri.reston.va.us>

On Fri, Mar 02, 2001 at 09:34:37AM -0500, Guido van Rossum wrote:
>> because the /usr/bin/env python thing hits the older python in /usr
>> first.
>> Don't bother if this is actually difficult.
>
>This could become a standard distutils feature!

It already does this for regular distributions (see build_scripts.py),
but running with a newly built Python causes problems; it uses
sys.executable, which results in '#!python' at build time.  I'm not
sure how to fix this; perhaps the Makefile should always set a
BUILDING_PYTHON environment variable, and the Distutils could check
for its being set.  

--amk




From nas at arctrix.com  Fri Mar  2 16:03:00 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 2 Mar 2001 07:03:00 -0800
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <20010302095617.A11182@ute.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Fri, Mar 02, 2001 at 09:56:17AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302095617.A11182@ute.cnri.reston.va.us>
Message-ID: <20010302070300.B11722@glacier.fnational.com>

On Fri, Mar 02, 2001 at 09:56:17AM -0500, Andrew Kuchling wrote:
> It already does this for regular distributions (see build_scripts.py),
> but running with a newly built Python causes problems; it uses
> sys.executable, which results in '#!python' at build time.  I'm not
> sure how to fix this; perhaps the Makefile should always set a
> BUILDING_PYTHON environment variable, and the Distutils could check
> for its being set.  

setup.py could fix this by assigning sys.executable to $(prefix)/bin/python
before installing.  I don't know if that would break anything
else though.

  Neil



From DavidA at ActiveState.com  Fri Mar  2 02:05:59 2001
From: DavidA at ActiveState.com (David Ascher)
Date: Thu, 1 Mar 2001 17:05:59 -0800
Subject: [Python-Dev] Finally, a Python Cookbook!
Message-ID: <PLEJJNOHDIGGLDPOGPJJOEOKCNAA.DavidA@ActiveState.com>

Hello all --

ActiveState is now hosting a site
(http://www.ActiveState.com/PythonCookbook) that will be the beginning of a
series of community-based language-specific cookbooks to be jointly
sponsored by ActiveState and O'Reilly.

The first in the series is the "Python Cookbook".  We will be announcing
this effort at the Python Conference, but wanted to give you a sneak peek at
it ahead of time.

The idea behind it is for it to be a managed open collaborative repository
of Python recipes that implements RCD (rapid content development) for a
cookbook that O'Reilly will eventually publish. The Python Cookbook will be
freely available for review and use by all. It will also be different than
any other project of its kind in one very important way. This will be a
community effort. A book written by the Python community and delivered to
the Python Community, as a handy reference and invaluable aid for those
still to join. The partnership of ActiveState and O'Reilly provides the
framework, the organization, and the resources necessary to help bring this
book to life.

If you've got the time, please dig in your code base for recipes which you
may have developed and consider contributing them.  That way, you'll help us
'seed' the cookbook for its launch at the 9th Python Conference on March
5th!

Whether you have the time to contribute or not, we'd appreciate it if you
registered, browsed the site and gave us feedback at
pythoncookbook at ActiveState.com.

We want to make sure that this site reflects the community's needs, so all
feedback is welcome.

Thanks in advance for all your efforts in making this a successful endeavor.

Thanks,

David Ascher & the Cookbook team
ActiveState - Perl Python Tcl XSLT - Programming for the People

Vote for Your Favorite Perl & Python Programming
Accomplishments in the first Active Awards!
>>http://www.ActiveState.com/Awards  <http://www.activestate.com/awards><<




From gward at cnri.reston.va.us  Fri Mar  2 17:10:53 2001
From: gward at cnri.reston.va.us (Greg Ward)
Date: Fri, 2 Mar 2001 11:10:53 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <200103021434.JAA06630@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 02, 2001 at 09:34:37AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com>
Message-ID: <20010302111052.A14221@thrak.cnri.reston.va.us>

On 02 March 2001, Guido van Rossum said:
> This could become a standard distutils feature!

It is -- if a script is listed in 'scripts' in setup.py, and it's a Python
script, its #! line is automatically munged to point to the python that's
running the setup script.

Hmmm, this could be a problem if that python hasn't been installed itself
yet.  IIRC, it just trusts sys.executable.

        Greg



From tim.one at home.com  Fri Mar  2 17:27:43 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 2 Mar 2001 11:27:43 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGKJDAA.tim.one@home.com>

[Guido]
> I was tickled when I found a quote from Tim Berners-Lee about Python
> here: http://www.w3.org/2000/10/swap/#L88
>
> Most quotable part: "Python is a language you can get into on one
> battery!"

Most baffling part:  "One day, 15 minutes before I had to leave for the
airport, I got my laptop back out of my bag, and sucked off the web the
python 1.6 system ...".  What about python.org steered people toward 1.6?  Of
course, Tim *is* a Tim, and they're not always rational ...





From guido at digicool.com  Fri Mar  2 17:28:59 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 11:28:59 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: Your message of "Fri, 02 Mar 2001 11:27:43 EST."
             <LNBBLJKPBEHFEDALKOLCGEGKJDAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCGEGKJDAA.tim.one@home.com> 
Message-ID: <200103021628.LAA07147@cj20424-a.reston1.va.home.com>

> [Guido]
> > I was tickled when I found a quote from Tim Berners-Lee about Python
> > here: http://www.w3.org/2000/10/swap/#L88
> >
> > Most quotable part: "Python is a language you can get into on one
> > battery!"
> 
> Most baffling part:  "One day, 15 minutes before I had to leave for the
> airport, I got my laptop back out of my bag, and sucked off the web the
> python 1.6 system ...".  What about python.org steered people toward 1.6?  Of
> course, Tim *is* a Tim, and they're not always rational ...

My guess is this was before 2.0 final was released.  I don't blame
him.  And after all, he's a Tim -- he can do what he wants to! :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas.heller at ion-tof.com  Fri Mar  2 17:38:04 2001
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 2 Mar 2001 17:38:04 +0100
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us>
Message-ID: <02b501c0a337$282a8d60$e000a8c0@thomasnotebook>

Greg Ward, who suddenly reappears:
> On 02 March 2001, Guido van Rossum said:
> > This could become a standard distutils feature!
> 
> It is -- if a script is listed in 'scripts' in setup.py, and it's a Python
> script, its #! line is automatically munged to point to the python that's
> running the setup script.
> 
What about this code in build_scripts.py?

  # check if Python is called on the first line with this expression.
  # This expression will leave lines using /usr/bin/env alone; presumably
  # the script author knew what they were doing...)
  first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')

Doesn't this mean that
#!/usr/bin/env python
lines are NOT fixed?
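(The effect of that pattern is easy to check; a quick sketch:)

```python
import re

# The expression quoted above: the negative lookahead skips
# "#!/usr/bin/env ..." lines, so only explicit interpreter paths match.
first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')

print(first_line_re.match("#!/usr/bin/env python") is None)        # env line is left alone
print(first_line_re.match("#!/usr/local/bin/python") is not None)  # explicit path is munged
```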

Thomas




From gward at python.net  Fri Mar  2 17:41:24 2001
From: gward at python.net (Greg Ward)
Date: Fri, 2 Mar 2001 11:41:24 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <20010302070300.B11722@glacier.fnational.com>; from nas@arctrix.com on Fri, Mar 02, 2001 at 07:03:00AM -0800
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302095617.A11182@ute.cnri.reston.va.us> <20010302070300.B11722@glacier.fnational.com>
Message-ID: <20010302114124.A2826@cthulhu.gerg.ca>

On 02 March 2001, Neil Schemenauer said:
> setup.py could fix this by assigning sys.executable to $(prefix)/bin/python
> before installing.  I don't know if that would break anything
> else though.

That *should* work.  Don't think Distutils relies on
"os.path.exists(sys.executable)" anywhere....

...oops, may have spoken too soon: the byte-compilation code (in
distutils/util.py) spawns sys.executable.  So if byte-compilation is
done in the same run as installing scripts, you lose.  Fooey.

        Greg
-- 
Greg Ward - just another /P(erl|ython)/ hacker          gward at python.net
http://starship.python.net/~gward/
Heisenberg may have slept here.



From gward at python.net  Fri Mar  2 17:47:39 2001
From: gward at python.net (Greg Ward)
Date: Fri, 2 Mar 2001 11:47:39 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <02b501c0a337$282a8d60$e000a8c0@thomasnotebook>; from thomas.heller@ion-tof.com on Fri, Mar 02, 2001 at 05:38:04PM +0100
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us> <02b501c0a337$282a8d60$e000a8c0@thomasnotebook>
Message-ID: <20010302114739.B2826@cthulhu.gerg.ca>

On 02 March 2001, Thomas Heller said:
> Greg Ward, who suddenly reappears:

"He's not dead, he's just resting!"

> What about this code in build_scripts.py?
> 
>   # check if Python is called on the first line with this expression.
>   # This expression will leave lines using /usr/bin/env alone; presumably
>   # the script author knew what they were doing...)
>   first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')

Hmm, that's a recent change:

  revision 1.7
  date: 2001/02/28 20:59:33;  author: akuchling;  state: Exp;  lines: +5 -3
  Leave #! lines featuring /usr/bin/env alone

> Doesn't this mean that
> #!/usr/bin/env python
> lines are NOT fixed?

Yup.  Andrew, care to explain why not munging "#!/usr/bin/env python"
lines is the right thing to do?  I happen to think it's not; I think #!
lines should always be munged (assuming this is a Python script, of
course).

        Greg
-- 
Greg Ward - nerd                                        gward at python.net
http://starship.python.net/~gward/
Disclaimer: All rights reserved. Void where prohibited. Limit 1 per customer.



From akuchlin at mems-exchange.org  Fri Mar  2 17:54:59 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 2 Mar 2001 11:54:59 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <20010302114739.B2826@cthulhu.gerg.ca>; from gward@python.net on Fri, Mar 02, 2001 at 11:47:39AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us> <02b501c0a337$282a8d60$e000a8c0@thomasnotebook> <20010302114739.B2826@cthulhu.gerg.ca>
Message-ID: <20010302115459.A3029@ute.cnri.reston.va.us>

On Fri, Mar 02, 2001 at 11:47:39AM -0500, Greg Ward wrote:
>>   # check if Python is called on the first line with this expression.
>>   # This expression will leave lines using /usr/bin/env alone; presumably
>>   # the script author knew what they were doing...)
>>   first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')
>
>Yup.  Andrew, care to explain why not munging "#!/usr/bin/env python"
>lines is the right thing to do?  I happen to think it's not; I think #!
>lines should always be munged (assuming this is a Python script, of
>course).

Disagree; as the comment says, "presumably the script author knew what
they were doing..." when they put /usr/bin/env at the top.  This had
to be done so that pydoc could be installed at all.

--amk



From guido at digicool.com  Fri Mar  2 18:01:50 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 12:01:50 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: Your message of "Fri, 02 Mar 2001 11:54:59 EST."
             <20010302115459.A3029@ute.cnri.reston.va.us> 
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us> <02b501c0a337$282a8d60$e000a8c0@thomasnotebook> <20010302114739.B2826@cthulhu.gerg.ca>  
            <20010302115459.A3029@ute.cnri.reston.va.us> 
Message-ID: <200103021701.MAA07349@cj20424-a.reston1.va.home.com>

> >>   # check if Python is called on the first line with this expression.
> >>   # This expression will leave lines using /usr/bin/env alone; presumably
> >>   # the script author knew what they were doing...)
> >>   first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')
> >
> >Yup.  Andrew, care to explain why not munging "#!/usr/bin/env python"
> >lines is the right thing to do?  I happen to think it's not; I think #!
> >lines should always be munged (assuming this is a Python script, of
> >course).
> 
> Disagree; as the comment says, "presumably the script author knew what
> they were doing..." when they put /usr/bin/env at the top.  This had
> to be done so that pydoc could be installed at all.

Don't understand the last sentence -- what started this thread is that
when pydoc is installed but there's another (older) installed python
that is first on $PATH, pydoc breaks.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas at xs4all.net  Fri Mar  2 21:34:31 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 2 Mar 2001 21:34:31 +0100
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 02, 2001 at 09:22:27AM -0500
References: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>
Message-ID: <20010302213431.Q9678@xs4all.nl>

On Fri, Mar 02, 2001 at 09:22:27AM -0500, Guido van Rossum wrote:

> I was tickled when I found a quote from Tim Berners-Lee about Python
> here: http://www.w3.org/2000/10/swap/#L88

> Most quotable part: "Python is a language you can get into on one
> battery!"

Actually, I think this bit is more important:

"I remember Guido trying to persuade me to use python as I was trying to
persuade him to write web software!"

So when can we expect the new Python web interface ? :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at acm.org  Fri Mar  2 21:32:27 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 2 Mar 2001 15:32:27 -0500 (EST)
Subject: [Python-Dev] doc tree frozen for 2.1b1
Message-ID: <15008.859.4988.155789@localhost.localdomain>

  The documentation is frozen until the 2.1b1 announcement goes out.
I have a couple of checkins to make, but the formatted HTML for the
Windows installer has already been cut & shipped.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Fri Mar  2 21:41:34 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 15:41:34 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: Your message of "Fri, 02 Mar 2001 21:34:31 +0100."
             <20010302213431.Q9678@xs4all.nl> 
References: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>  
            <20010302213431.Q9678@xs4all.nl> 
Message-ID: <200103022041.PAA12359@cj20424-a.reston1.va.home.com>

> Actually, I think this bit is more important:
> 
> "I remember Guido trying to persuade me to use python as I was trying to
> persuade him to write web software!"
> 
> So when can we expect the new Python web interface ? :-)

There's actually a bit of a sad story.  I really liked the early web,
and wrote one of the earliest graphical web browsers (before Mozilla;
I was using Python and stdwin).  But I didn't get the importance of
dynamic content, and initially scoffed at the original cgi.py,
concocted by Michael McLay (always a good nose for trends!) and Steven
Majewski (ditto).

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fdrake at acm.org  Fri Mar  2 21:49:09 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 2 Mar 2001 15:49:09 -0500 (EST)
Subject: [Python-Dev] Python 2.1 beta 1 documentation online
Message-ID: <15008.1861.84677.687041@localhost.localdomain>

  The documentation for Python 2.1 beta 1 is now online:

	http://python.sourceforge.net/devel-docs/

  This is the same as the documentation that will ship with the
Windows installer.
  This is the online location of the development version of the
documentation.  As I make updates to the documentation, this will be
updated periodically; the "front page" will indicate the date of the
most recent update.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Fri Mar  2 23:46:09 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 17:46:09 -0500
Subject: [Python-Dev] Python 2.1b1 released
Message-ID: <200103022246.RAA18529@cj20424-a.reston1.va.home.com>

With great pleasure I announce the release of Python 2.1b1.  This is a
big step towards the release of Python 2.1; the final release is
expected to take place in mid April.

Find out all about 2.1b1, including docs and downloads (Windows
installer and source tarball), at the 2.1 release page:

    http://www.python.org/2.1/


WHAT'S NEW?
-----------

For the big picture, see Andrew Kuchling's What's New in Python 2.1:

    http://www.amk.ca/python/2.1/

For more detailed release notes, see SourceForge:

    http://sourceforge.net/project/shownotes.php?release_id=25924

The big news since 2.1a2 was released a month ago:

- Nested Scopes (PEP 227)[*] are now optional.  They must be enabled
  by including the statement "from __future__ import nested_scopes" at
  the beginning of a module (PEP 236).  Nested scopes will be a
  standard feature in Python 2.2.
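  As a sketch of the behavior PEP 227 enables (the function names here
  are illustrative; under the old two-scope rule the inner reference
  raised a NameError unless worked around with a default argument):

```python
# In Python 2.1 this behaviour required, at the top of the module:
#     from __future__ import nested_scopes
def make_adder(n):
    def add(x):
        return x + n  # "n" is resolved in the enclosing function's scope
    return add

print(make_adder(5)(3))  # prints 8
```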

- Compile-time warnings are now generated for a number of conditions
  that will break or change in meaning when nested scopes are enabled.

- The new tool *pydoc* displays module documentation, extracted from
  doc strings.  It works in a text environment as well as in a GUI
  environment (where it cooperates with a web browser).  On Windows,
  this is in the Start menu as "Module Docs".

- Case-sensitive import.  On systems with case-insensitive but
  case-preserving file systems, such as Windows (including Cygwin) and
  MacOS, import now continues to search the next directory on sys.path
  when a case mismatch is detected.  See PEP 235 for the full scoop.

- New platforms.  Python 2.1 now fully supports MacOS X, Cygwin, and
  RISCOS.

[*] For PEPs (Python Enhancement Proposals), see the PEP index:

    http://python.sourceforge.net/peps/

I hope to see you all next week at the Python9 conference in Long
Beach, CA:

    http://www.python9.org

--Guido van Rossum (home page: http://www.python.org/~guido/)



From aahz at panix.com  Sat Mar  3 19:21:44 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sat, 3 Mar 2001 13:21:44 -0500 (EST)
Subject: [Python-Dev] Bug fix releases (was Re: Nested scopes resolution -- you can breathe again!)
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org>
Message-ID: <200103031821.NAA24060@panix3.panix.com>

[posted to c.l.py with cc to python-dev]

[I apologize for the delay in posting this, but it's taken me some time
to get my thoughts straight.  I hope that by posting this right before
IPC9 there'll be a chance to get some good discussion in person.]

In article <mailman.982897324.9109.python-list at python.org>,
Guido van Rossum  <guido at digicool.com> wrote:
>
>We have clearly underestimated how much code the nested scopes would
>break, but more importantly we have underestimated how much value our
>community places on stability.  

I think so, yes, on that latter clause.  I think perhaps it wasn't clear
at the time, but I believe that much of the yelling over "print >>" was
less over the specific design but because it came so close to the
release of 2.0 that there wasn't *time* to sit down and talk things
over rationally.

As I see it, there's a natural tension between adding features
and delivering bug fixes.  Particularly because of Microsoft, I think
that upgrading to a feature release to get bug fixes has become anathema
to a lot of people, and I think that seeing features added or changed
close to a release reminds people too much of the Microsoft upgrade
treadmill.

>So here's the deal: we'll make nested scopes an optional feature in
>2.1, default off, selectable on a per-module basis using a mechanism
>that's slightly hackish but is guaranteed to be safe.  (See below.)
>
>At the same time, we'll augment the compiler to detect all situations
>that will break when nested scopes are introduced in the future, and
>issue warnings for those situations.  The idea here is that warnings
>don't break code, but encourage folks to fix their code so we can
>introduce nested scopes in 2.2.  Given our current pace of releases
>that should be about 6 months warning.

As some other people have pointed out, six months is actually a rather
short cycle when it comes to delivering enterprise applications across
hundreds or thousands of machines.  Notice how many people have said
they haven't upgraded from 1.5.2 yet!  Contrast that with the quickness
of the 1.5.1 to 1.5.2 upgrade.

I believe that "from __future__" is a good idea, but it is at best a
bandage over the feature/bug fix tension.  I think that the real issue
is that in the world of core Python development, release N is always a
future release, never the current release; as soon as release N goes out
the door into production, it immediately becomes release N-1 and forever
dead to development.

Rather than change that mindset directly, I propose that we move to a
forked model of development.  During the development cycle for any given
release, release (N-1).1 is also a live target -- but strictly for bug
fixes.  I suggest that shortly after the release for Na1, there should
also be a release for (N-1).1b1; shortly after the release of Nb1, there
would be (N-1).1b2.  And (N-1).1 would be released shortly after N.

This means that each feature-based release gets one-and-only-one pure
bugfix release.  I think this will do much to promote the idea of Python
as a stable platform for application development.

There are a number of ways I can see this working, including setting up
a separate project at SourceForge (e.g. pythonpatch.sourceforge.net).
But I don't think this will work at all unless the PythonLabs team is at
least willing to "bless" the bugfix release.  Uncle Timmy has been known
to make snarky comments about forever maintaining 1.5.2; I think this is
a usable compromise that will take relatively little effort to keep
going once it's set up.

I think one key advantage of this approach is that a lot more people
will be willing to try out a beta of a strict bugfix release, so the
release N bugfixes will get more testing than they otherwise would.

If there's interest in this idea, I'll write it up as a formal PEP.

It's too late for my proposed model to work during the 2.1 release
cycle, but I think it would be an awfully nice gesture to the community
to take a month off after 2.1 to create 2.0.1, before going on to 2.2.



BTW, you should probably blame Fredrik for this idea.  ;-)  If he had
skipped providing 1.5.2 and 2.0 versions of sre, I probably wouldn't
have considered this a workable idea.  I was just thinking that it was
too bad there wasn't a packaged version of 2.0 containing the new sre,
and that snowballed into this.
-- 
                      --- Aahz (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het    <*>     http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Nostalgia just ain't what it used to be



From guido at digicool.com  Sat Mar  3 20:10:35 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 14:10:35 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 13:21:44 EST."
             <200103031821.NAA24060@panix3.panix.com> 
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org>  
            <200103031821.NAA24060@panix3.panix.com> 
Message-ID: <200103031910.OAA21663@cj20424-a.reston1.va.home.com>

Aahz writes:
> [posted to c.l.py with cc to python-dev]
> 
> [I apologize for the delay in posting this, but it's taken me some time
> to get my thoughts straight.  I hope that by posting this right before
> IPC9 there'll be a chance to get some good discussion in person.]

Excellent.  Even in time for me to mention this in my keynote! :-)

> In article <mailman.982897324.9109.python-list at python.org>,
> Guido van Rossum  <guido at digicool.com> wrote:
> >
> >We have clearly underestimated how much code the nested scopes would
> >break, but more importantly we have underestimated how much value our
> >community places on stability.  
> 
> I think so, yes, on that latter clause.  I think perhaps it wasn't clear
> at the time, but I believe that much of the yelling over "print >>" was
> less over the specific design but because it came so close to the
> release of 2.0 that there wasn't *time* to sit down and talk things
> over rationally.

In my eyes the issues are somewhat different: "print >>" couldn't
possibly break existing code; nested scopes clearly do, and that's why
we decided to use the __future__ statement.

But I understand that you're saying that the community has grown so
conservative that it can't stand new features even if they *are* fully
backwards compatible.

I wonder, does that extend to new library modules?  Is there also
resistance against the growth there?  I don't think so -- if anything,
people are clamoring for more stuff to become standard (while at the
same time I feel some pressure to cut dead wood, like the old SGI
multimedia modules).

So that relegates us at PythonLabs to a number of things: coding new
modules (boring), or trying to improve performance of the virtual
machine (equally boring, and difficult to boot), or fixing bugs (did I
mention boring? :-).

So what can we do for fun?  (Besides redesigning Zope, which is lots
of fun, but runs into the same issues.)

> As I see it, there's a natural tension between between adding features
> and delivering bug fixes.  Particularly because of Microsoft, I think
> that upgrading to a feature release to get bug fixes has become anathema
> to a lot of people, and I think that seeing features added or changed
> close to a release reminds people too much of the Microsoft upgrade
> treadmill.

Actually, I thought that the Microsoft way these days was to smuggle
entire new subsystems into bugfix releases.  What else are "Service
Packs" for? :-)

> >So here's the deal: we'll make nested scopes an optional feature in
> >2.1, default off, selectable on a per-module basis using a mechanism
> >that's slightly hackish but is guaranteed to be safe.  (See below.)
> >
> >At the same time, we'll augment the compiler to detect all situations
> >that will break when nested scopes are introduced in the future, and
> >issue warnings for those situations.  The idea here is that warnings
> >don't break code, but encourage folks to fix their code so we can
> >introduce nested scopes in 2.2.  Given our current pace of releases
> >that should be about 6 months warning.
> 
> As some other people have pointed out, six months is actually a rather
> short cycle when it comes to delivering enterprise applications across
> hundreds or thousands of machines.  Notice how many people have said
> they haven't upgraded from 1.5.2 yet!  Contrast that with the quickness
> of the 1.5.1 to 1.5.2 upgrade.

Clearly, we're taking this into account.  If we believed you all
upgraded the day we announced a new release, we'd be even more
conservative with adding new features (at least features introducing
incompatibilities).

> I believe that "from __future__" is a good idea, but it is at best a
> bandage over the feature/bug fix tension.  I think that the real issue
> is that in the world of core Python development, release N is always a
> future release, never the current release; as soon as release N goes out
> the door into production, it immediately becomes release N-1 and forever
> dead to development
> 
> Rather than change that mindset directly, I propose that we move to a
> forked model of development.  During the development cycle for any given
> release, release (N-1).1 is also a live target -- but strictly for bug
> fixes.  I suggest that shortly after the release for Na1, there should
> also be a release for (N-1).1b1; shortly after the release of Nb1, there
> would be (N-1).1b2.  And (N-1).1 would be released shortly after N.

Your math at first confused the hell out of me, but I see what you
mean.  You want us to spend time on 2.0.1 which should be a bugfix
release for 2.0, while at the same time working on 2.1 which is a new
feature release.

Guess what -- I am secretly (together with the PSU) planning a 2.0.1
release.  I'm waiting however for obtaining the ownership rights to
the 2.0 release, so we can fix the GPL incompatibility issue in the
license at the same time.  (See the 1.6.1 release.)  I promise that
2.0.1, unlike 1.6.1, will contain more than a token set of real
bugfixes.  Hey, we already have a branch in the CVS tree for 2.0.1
development!  (Tagged "release20-maint".)

We could use some checkins on that branch though.

> This means that each feature-based release gets one-and-only-one pure
> bugfix release.  I think this will do much to promote the idea of Python
> as a stable platform for application development.

Anything we can do to please those republicans! :-)

> There are a number of ways I can see this working, including setting up
> a separate project at SourceForge (e.g. pythonpatch.sourceforge.net).
> But I don't think this will work at all unless the PythonLabs team is at
> least willing to "bless" the bugfix release.  Uncle Timmy has been known
> to make snarky comments about forever maintaining 1.5.2; I think this is
> a usable compromise that will take relatively little effort to keep
> going once it's set up.

With the CVS branch it's *trivial* to keep it going.  We should have
learned from the Tcl folks, they've had 8.NpM releases for a while.

> I think one key advantage of this approach is that a lot more people
> will be willing to try out a beta of a strict bugfix release, so the
> release N bugfixes will get more testing than they otherwise would.

Wait a minute!  Now you're making it too complicated.  Betas of bugfix
releases?  That seems to defeat the purpose.  What kind of
beta-testing does a pure bugfix release need?  Presumably each
individual bugfix applied has already been tested before it is checked
in!  Or are you thinking of adding small new features to a "bugfix"
release?  That ought to be a no-no according to your own philosophy!

> If there's interest in this idea, I'll write it up as a formal PEP.

Please do.

> It's too late for my proposed model to work during the 2.1 release
> cycle, but I think it would be an awfully nice gesture to the community
> to take a month off after 2.1 to create 2.0.1, before going on to 2.2.

It's not too late, as I mentioned.  We'll also do this for 2.1.

> BTW, you should probably blame Fredrik for this idea.  ;-)  If he had
> skipped providing 1.5.2 and 2.0 versions of sre, I probably wouldn't
> have considered this a workable idea.  I was just thinking that it was
> too bad there wasn't a packaged version of 2.0 containing the new sre,
> and that snowballed into this.

So the new (2.1) sre code should be merged back into 2.0.1, right?
Fredrik, go ahead!  We'll start planning for the 2.0.1 release right
after we're back from the conference.

BTW, See you at the conference!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fdrake at acm.org  Sat Mar  3 20:30:13 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Sat, 3 Mar 2001 14:30:13 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103031910.OAA21663@cj20424-a.reston1.va.home.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
	<mailman.982897324.9109.python-list@python.org>
	<200103031821.NAA24060@panix3.panix.com>
	<200103031910.OAA21663@cj20424-a.reston1.va.home.com>
Message-ID: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > I wonder, does that extend to new library modules?  Is there also
 > resistance against the growth there?  I don't think so -- if anything,
 > people are clamoring for more stuff to become standard (while at the

  There is still the issue of name clashes; introducing a new module
in the top-level namespace introduces a potential conflict with
someone's application-specific modules.  This is a good reason for us
to get the standard library packagized sooner rather than later
(although this would have to be part of a "feature" release;).

 > Wait a minute!  Now you're making it too complicated.  Betas of bugfix
 > releases?  That seems to defeat the purpose.  What kind of

  Betas of the bugfix releases are important -- portability testing is
fairly difficult to do when all we have are Windows and Linux/x86
boxes.  There's definitely a need for at least one beta.  We probably
don't need the lengthy, multi-phase alpha/alpha/beta/beta/candidate
cycle we're using for feature releases now.

 > It's not too late, as I mentioned.  We'll also do this for 2.1.

  Managing the bugfix releases would also be an excellent task for
someone who's expecting to use the bugfix releases more than the
feature releases -- the mentality has to be right for the task.  I
know I'm much more of a "features" person, and would have a hard time
not crossing the line if it were up to me what went into a bugfix
release.

 > BTW, See you at the conference!

  If we don't get snowed in!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Sat Mar  3 20:44:19 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 14:44:19 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 14:30:13 EST."
             <15009.17989.88203.844343@cj42289-a.reston1.va.home.com> 
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <200103031910.OAA21663@cj20424-a.reston1.va.home.com>  
            <15009.17989.88203.844343@cj42289-a.reston1.va.home.com> 
Message-ID: <200103031944.OAA21835@cj20424-a.reston1.va.home.com>

> Guido van Rossum writes:
>  > I wonder, does that extend to new library modules?  Is there also
>  > resistance against the growth there?  I don't think so -- if anything,
>  > people are clamoring for more stuff to become standard (while at the
> 
>   There is still the issue of name clashes; introducing a new module
> in the top-level namespace introduces a potential conflict with
> someone's application-specific modules.  This is a good reason for us
> to get the standard library packagized sooner rather than later
> (although this would have to be part of a "feature" release;).

But of course the library repackaging in itself would cause enormous
outcries, because in a very real sense it *does* break code.

>  > Wait a minute!  Now you're making it too complicated.  Betas of bugfix
>  > releases?  That seems to defeat the purpose.  What kind of
> 
>   Betas of the bugfix releases are important -- portability testing is
> fairly difficult to do when all we have are Windows and Linux/x86
> boxes.  There's definitely a need for at least one beta.  We probably
> don't need the lengthy, multi-phase alpha/alpha/beta/beta/candidate
> cycle we're using for feature releases now.

OK, you can have *one* beta.  That's it.

>  > It's not too late, as I mentioned.  We'll also do this for 2.1.
> 
>   Managing the bugfix releases would also be an excellent task for
> someone who's expecting to use the bugfix releases more than the
> feature releases -- the mentality has to be right for the task.  I
> know I'm much more of a "features" person, and would have a hard time
> not crossing the line if it were up to me what went into a bugfix
> release.

That's how all of us here at PythonLabs are feeling...  I feel a
community task coming.  I'll bless a 2.0.1 release and the general
idea of bugfix releases, but doing the grunt work won't be a
PythonLabs task.  Someone else inside or outside Python-dev will have
to do some work.  Aahz?

>  > BTW, See you at the conference!
> 
>   If we don't get snowed in!

Good point.  East coasters flying to LA on Monday, watch your weather
forecast!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fdrake at cj42289-a.reston1.va.home.com  Sat Mar  3 20:47:49 2001
From: fdrake at cj42289-a.reston1.va.home.com (Fred Drake)
Date: Sat,  3 Mar 2001 14:47:49 -0500 (EST)
Subject: [Python-Dev] [development doc updates]
Message-ID: <20010303194749.629AC28803@cj42289-a.reston1.va.home.com>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/


Additional information on using non-Microsoft compilers on Windows when
using the Distutils, contributed by Rene Liebscher.




From tim.one at home.com  Sat Mar  3 20:55:09 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 3 Mar 2001 14:55:09 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>

[Fred L. Drake, Jr.]
> ...
>   Managing the bugfix releases would also be an excellent task for
> someone who's expecting to use the bugfix releases more than the
> feature releases -- the mentality has to be right for the task.  I
> know I'm much more of a "features" person, and would have a hard time
> not crossing the line if it were up to me what went into a bugfix
> release.

Note there was never a bugfix release for 1.5.2, despite that 1.5.2 had some
serious bugs, and that 1.5.2 was current for an unprecedentedly long time.
Guido put out a call for volunteers to produce a 1.5.2 bugfix release, but
nobody responded.  Past is prelude ...

everyone-is-generous-with-everyone-else's-time-ly y'rs  - tim




From fdrake at acm.org  Sat Mar  3 20:53:45 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Sat, 3 Mar 2001 14:53:45 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
References: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
Message-ID: <15009.19401.787058.744462@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Note there was never a bugfix release for 1.5.2, despite that 1.5.2 had some
 > serious bugs, and that 1.5.2 was current for an unprecedentedly long time.
 > Guido put out a call for volunteers to produce a 1.5.2 bugfix release, but
 > nobody responded.  Past is prelude ...

  And as long as that continues, I'd have to conclude that the user
base is largely happy with the way we've done things.  *If* users want
bugfix releases badly enough, someone will do them.  If not, hey,
features can be useful!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From fdrake at acm.org  Sat Mar  3 20:54:31 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Sat, 3 Mar 2001 14:54:31 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103031944.OAA21835@cj20424-a.reston1.va.home.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
	<mailman.982897324.9109.python-list@python.org>
	<200103031821.NAA24060@panix3.panix.com>
	<200103031910.OAA21663@cj20424-a.reston1.va.home.com>
	<15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
	<200103031944.OAA21835@cj20424-a.reston1.va.home.com>
Message-ID: <15009.19447.154958.449303@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > But of course the library repackaging in itself would cause enormous
 > outcries, because in a very real sense it *does* break code.

  That's why it has to be a feature release.  ;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Sat Mar  3 21:07:09 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 15:07:09 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 14:55:09 EST."
             <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com> 
Message-ID: <200103032007.PAA21925@cj20424-a.reston1.va.home.com>

> [Fred L. Drake, Jr.]
> > ...
> >   Managing the bugfix releases would also be an excellent task for
> > someone who's expecting to use the bugfix releases more than the
> > feature releases -- the mentality has to be right for the task.  I
> > know I'm much more of a "features" person, and would have a hard time
> > not crossing the line if it were up to me what went into a bugfix
> > release.

[Uncle Timmy]
> Note there was never a bugfix release for 1.5.2, despite that 1.5.2 had some
> serious bugs, and that 1.5.2 was current for an unprecedentedly long time.
> Guido put out a call for volunteers to produce a 1.5.2 bugfix release, but
> nobody responded.  Past is prelude ...
> 
> everyone-is-generous-with-everyone-else's-time-ly y'rs  - tim

I understand the warning.  How about the following (and then I really
have to go write my keynote speech :-).  PythonLabs will make sure
that it will happen.  But how much stuff goes into the bugfix release
is up to the community.

We'll give SourceForge commit privileges to individuals who want to do
serious work on the bugfix branch -- but before you get commit
privileges, you must first show that you know what you are doing by
submitting useful patches through the SourceForge patch manager.

Since a lot of the 2.0.1 effort will be deciding which code from 2.1
to merge back into 2.0.1, it may not make sense to upload context
diffs to SourceForge.  Instead, we'll accept reasoned instructions for
specific patches to be merged back.  Instructions like "cvs update
-j<rev1> -j<rev2> <file>" are very helpful; please also explain why!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From aahz at panix.com  Sat Mar  3 22:55:28 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sat, 3 Mar 2001 16:55:28 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <mailman.983646726.27322.python-list@python.org>
Message-ID: <200103032155.QAA05049@panix3.panix.com>

In article <mailman.983646726.27322.python-list at python.org>,
Guido van Rossum  <guido at digicool.com> wrote:
>Aahz writes:
>>
>> I think so, yes, on that latter clause.  I think perhaps it wasn't clear
>> at the time, but I believe that much of the yelling over "print >>" was
>> less over the specific design but because it came so close to the
>> release of 2.0 that there wasn't *time* to sit down and talk things
>> over rationally.
>
>In my eyes the issues are somewhat different: "print >>" couldn't
>possibly break existing code; nested scopes clearly do, and that's why
>we decided to use the __future__ statement.
>
>But I understand that you're saying that the community has grown so
>conservative that it can't stand new features even if they *are* fully
>backwards compatible.

Then you understand incorrectly.  There's a reason why I emphasized
"*time*" up above.  It takes time to grok a new feature, time to think
about whether and how we should argue in favor or against it, time to
write comprehensible and useful arguments.  In hindsight, I think you
probably did make the right design decision on "print >>", no matter how
ugly I think it looks.  But I still think you made absolutely the wrong
decision to include it in 2.0.

>So that relegates us at PythonLabs to a number of things: coding new
>modules (boring), or trying to improve performance of the virtual
>machine (equally boring, and difficult to boot), or fixing bugs (did I
>mention boring? :-).
>
>So what can we do for fun?  (Besides redesigning Zope, which is lots
>of fun, but runs into the same issues.)

Write new versions of Python.  You've come up with a specific protocol
in a later post that I think I approve of; I was trying to suggest a
balance between lots of grunt work maintenance and what I see as
perpetual language instability in the absence of any bug fix releases.

>Your math at first confused the hell out of me, but I see what you
>mean.  You want us to spend time on 2.0.1 which should be a bugfix
>release for 2.0, while at the same time working on 2.1 which is a new
>feature release.

Yup.  The idea is that because it's always an N and N-1 pair, the base
code is the same for both and applying patches to both should be
(relatively speaking) a small amount of extra work.  Most of the work
lies in deciding *which* patches should go into N-1.

>Guess what -- I am secretly (together with the PSU) planning a 2.0.1
>release.  I'm waiting however for obtaining the ownership rights to
>the 2.0 release, so we can fix the GPL incompatibility issue in the
>license at the same time.  (See the 1.6.1 release.)  I promise that
>2.0.1, unlike 1.6.1, will contain more than a token set of real
>bugfixes.  Hey, we already have a branch in the CVS tree for 2.0.1
>development!  (Tagged "release20-maint".)

Yay!  (Sorry, I'm not much of a CVS person; the one time I tried using
it, I couldn't even figure out where to download the software.  Call me
stupid.)

>We could use some checkins on that branch though.

Fair enough.

>> This means that each feature-based release gets one-and-only-one pure
>> bugfix release.  I think this will do much to promote the idea of Python
>> as a stable platform for application development.
>
>Anything we can do to please those republicans! :-)

<grin>

>> There are a number of ways I can see this working, including setting up
>> a separate project at SourceForge (e.g. pythonpatch.sourceforge.net).
>> But I don't think this will work at all unless the PythonLabs team is at
>> least willing to "bless" the bugfix release.  Uncle Timmy has been known
>> to make snarky comments about forever maintaining 1.5.2; I think this is
>> a usable compromise that will take relatively little effort to keep
>> going once it's set up.
>
>With the CVS branch it's *trivial* to keep it going.  We should have
>learned from the Tcl folks, they've had 8.NpM releases for a while.

I'm suggesting having one official PythonLabs-created bug fix release as
being a small incremental effort over the work in the feature release.
But if you want it to be an entirely community-driven effort, I can't
argue with that.

My one central point is that I think this will fail if PythonLabs
doesn't agree to formally certify each release.

>> I think one key advantage of this approach is that a lot more people
>> will be willing to try out a beta of a strict bugfix release, so the
>> release N bugfixes will get more testing than they otherwise would.
>
>Wait a minute!  Now you're making it too complicated.  Betas of bugfix
>releases?  That seems to defeat the purpose.  What kind of
>beta-testing does a pure bugfix release need?  Presumably each
>individual bugfix applied has already been tested before it is checked
>in!  

"The difference between theory and practice is that in theory, there is
no difference, but in practice, there is."

I've seen too many cases where a bugfix introduced new bugs somewhere
else.  Even if "tested", there might be a border case where an
unexpected result shows up.  Finally, there's the issue of system
testing, making sure the entire package of bugfixes works correctly.

The main reason I suggested two betas was to "lockstep" the bugfix
release to the next version's feature release.

>Or are you thinking of adding small new features to a "bugfix"
>release?  That ought to be a no-no according to your own philosophy!

That's correct.  One problem, though, is that sometimes it's a little
difficult to agree on whether a particular piece of code is a feature or
a bugfix.  For example, the recent work to resolve case-sensitive
imports could be argued either way -- and if we want Python 2.0 to run
on OS X, we'd better decide that it's a bugfix.  ;-)

>> If there's interest in this idea, I'll write it up as a formal PEP.
>
>Please do.

Okay, I'll do it after the conference.  I've e-mailed Barry to ask for a
PEP number.
-- 
                      --- Aahz (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het    <*>     http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Nostalgia just ain't what it used to be
-- 
                      --- Aahz (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het    <*>     http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Nostalgia just ain't what it used to be



From guido at digicool.com  Sat Mar  3 23:18:45 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 17:18:45 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 16:55:28 EST."
             <200103032155.QAA05049@panix3.panix.com> 
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <mailman.983646726.27322.python-list@python.org>  
            <200103032155.QAA05049@panix3.panix.com> 
Message-ID: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>

[Aahz]
> >> I think so, yes, on that latter clause.  I think perhaps it wasn't clear
> >> at the time, but I believe that much of the yelling over "print >>" was
> >> less over the specific design but because it came so close to the
> >> release of 2.0 that there wasn't *time* to sit down and talk things
> >> over rationally.

[Guido]
> >In my eyes the issues are somewhat different: "print >>" couldn't
> >possibly break existing code; nested scopes clearly do, and that's why
> >we decided to use the __future__ statement.
> >
> >But I understand that you're saying that the community has grown so
> >conservative that it can't stand new features even if they *are* fully
> >backwards compatible.

[Aahz]
> Then you understand incorrectly.  There's a reason why I emphasized
> "*time*" up above.  It takes time to grok a new feature, time to think
> about whether and how we should argue in favor or against it, time to
> write comprehensible and useful arguments.  In hindsight, I think you
> probably did make the right design decision on "print >>", no matter how
> ugly I think it looks.  But I still think you made absolutely the wrong
> decision to include it in 2.0.

Then I respectfully disagree.  We took plenty of time to discuss
"print >>" amongst ourselves.  I don't see the point of letting the
whole community argue about every little new idea before we include it
in a release.  We want good technical feedback, of course.  But if it
takes time to get emotionally used to an idea, you can use your own
time.

> >With the CVS branch it's *trivial* to keep it going.  We should have
> >learned from the Tcl folks, they've had 8.NpM releases for a while.
> 
> I'm suggesting having one official PythonLabs-created bug fix release as
> being a small incremental effort over the work in the feature release.
> But if you want it to be an entirely community-driven effort, I can't
> argue with that.

We will surely put in an effort, but we're limited in what we can do,
so I'm inviting the community to pitch in.  Even just a wish-list of
fixes that are present in 2.1 that should be merged back into 2.0.1
would help!

> My one central point is that I think this will fail if PythonLabs
> doesn't agree to formally certify each release.

Of course we will do that -- I already said so.  And not just for
2.0.1 -- for all bugfix releases, as long as they make sense.

> I've seen too many cases where a bugfix introduced new bugs somewhere
> else.  Even if "tested", there might be a border case where an
> unexpected result shows up.  Finally, there's the issue of system
> testing, making sure the entire package of bugfixes works correctly.

I hope that the experience with 2.1 will validate most bugfixes that
go into 2.0.1.

> The main reason I suggested two betas was to "lockstep" the bugfix
> release to the next version's feature release.

Unclear what you want there.  Why tie the two together?  How?

> >Or are you thinking of adding small new features to a "bugfix"
> >release?  That ought to be a no-no according to your own philosophy!
> 
> That's correct.  One problem, though, is that sometimes it's a little
> difficult to agree on whether a particular piece of code is a feature or
> a bugfix.  For example, the recent work to resolve case-sensitive
> imports could be argued either way -- and if we want Python 2.0 to run
> on OS X, we'd better decide that it's a bugfix.  ;-)

But the Windows change is clearly a feature, so that can't be added to
2.0.1.  We'll have to discuss this particular one.  If 2.0 doesn't
work on MacOS X now, why couldn't MacOS X users install 2.1?  They
can't have working code that breaks, can they?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Sun Mar  4 06:18:05 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 4 Mar 2001 00:18:05 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMGJDAA.tim.one@home.com>

FYI, in reviewing Misc/HISTORY, it appears that the last Python release
*called* a "pure bugfix release" was in November of 1994 (1.1.1) -- although
"a few new features were added to tkinter" anyway.

fine-by-me-if-we-just-keep-up-the-good-work<wink>-ly y'rs  - tim




From tim.one at home.com  Sun Mar  4 07:00:44 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 4 Mar 2001 01:00:44 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEMHJDAA.tim.one@home.com>

[Aahz]
> ...
> For example, the recent work to resolve case-sensitive imports could
> be argued either way -- and if we want Python 2.0 to run on OS X,
> we'd better decide that it's a bugfix.  ;-)

[Guido]
> But the Windows change is clearly a feature,

Yes.

> so that can't be added to 2.0.1.

That's what Aahz is debating.

> We'll have to discuss this particular one.  If 2.0 doesn't
> work on MacOS X now, why couldn't MacOS X users install 2.1?  They
> can't have working code that breaks, can they?

You're a Giant Corporation that ships a multi-platform product, including
Python 2.0.  Since your IT dept is frightened of its own shadow, they won't
move to 2.1.  Since there is no bound to your greed, you figure that even if
there are only a dozen MacOS X users in the world, you could make 10 bucks
off of them if only you can talk PythonLabs into treating the lack of 2.0
MacOS X support as "a bug", getting PythonLabs to backstitch the port into a
2.0 follow-on (*calling* it 2.0.x serves to pacify your IT paranoids).  No
cost to you, and 10 extra dollars in your pocket.  Everyone wins <wink>.

There *are* some companies so unreasonable in their approach.  Replace "a
dozen" and "10 bucks" by much higher numbers, and the number of companies
mushrooms accordingly.

If we put out a release that actually did nothing except fix legitimate bugs,
PythonLabs may have enough fingers to count the number of downloads.  For
example, keen as *I* was to see a bugfix release for the infamous 1.5.2
"invalid tstate" bug, I didn't expect anyone would pick it up except for Mark
Hammond and the other guy who bumped into it (it was very important to them).
Other people simply won't pick it up unless and until they bump into the bug
it fixes, and due to the same "if it's not obviously broken, *any* change is
dangerous" fear that motivates everyone clinging to old releases by choice.

Curiously, I eventually got my Win95 box into a state where it routinely ran
for a solid week without crashing (the MTBF at the end was about 100x higher
than when I got the machine).  I didn't do that by avoiding MS updates, but
by installing *every* update they offered ASAP, even for subsystems I had no
intention of ever using.  That's the contrarian approach to keeping your
system maximally stable, relying on the observation that the code that works
best is extremely likely to be the code that the developers use themselves.

If someone thinks there's a market for Python bugfix releases that's worth
more than it costs, great -- they can get filthy rich off my appalling lack
of vision <wink>.

"worth-more-than-it-costs"-is-key-ly y'rs  - tim




From tim.one at home.com  Sun Mar  4 07:50:58 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 4 Mar 2001 01:50:58 -0500
Subject: [Python-Dev] a small C style question
In-Reply-To: <05f101c0a2f3$cf4bae10$e46940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMLJDAA.tim.one@home.com>

[Fredrik Lundh]
> DEC's OpenVMS compiler are a bit pickier than most other compilers.
> among other things, it correctly notices that the "code" variable in
> this statement is an unsigned variable:
>
>     UNICODEDATA:
>
>         if (code < 0 || code >= 65536)
>     ........^
>     %CC-I-QUESTCOMPARE, In this statement, the unsigned
>     expression "code" is being compared with a relational
>     operator to a constant whose value is not greater than
>     zero.  This might not be what you intended.
>     at line number 285 in file UNICODEDATA.C
>
> the easiest solution would of course be to remove the "code < 0"
> part, but code is a Py_UCS4 variable.  what if someone some day
> changes Py_UCS4 to a 64-bit signed integer, for example?
>
> what's the preferred style?
>
> 1) leave it as is, and let OpenVMS folks live with the
> compiler complaint
>
> 2) get rid of "code < 0" and hope that nobody messes
> up the Py_UCS4 declaration
>
> 3) cast "code" to a known unsigned type, e.g:
>
>         if ((unsigned int) code >= 65536)

#2.  The comment at the declaration of Py_UCS4 insists that an unsigned type
be used:

/*
 * Use this typedef when you need to represent a UTF-16 surrogate pair
 * as single unsigned integer.
             ^^^^^^^^
 */
#if SIZEOF_INT >= 4
typedef unsigned int Py_UCS4;
#elif SIZEOF_LONG >= 4
typedef unsigned long Py_UCS4;
#endif

If someone needs to boost that to a 64-bit int someday (hard to imagine ...),
they can boost it to an unsigned 64-bit int just as well.

If you really need to cater to impossibilities <0.5 wink>, #define a
Py_UCS4_IN_RANGE macro next to the typedef, and use the macro instead.




From gmcm at hypernet.com  Sun Mar  4 16:54:50 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Sun, 4 Mar 2001 10:54:50 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEMHJDAA.tim.one@home.com>
References: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>
Message-ID: <3AA21EFA.30660.4C134459@localhost>

[Tim justifies one-release-back mentality]
> You're a Giant Corporation that ships a multi-platform product,
> including Python 2.0.  Since your IT dept is frightened of its
> own shadow, they won't move to 2.1.  Since there is no bound to
> your greed, you figure that even if there are only a dozen MacOS
> X users in the world, you could make 10 bucks off of them if only
> you can talk PythonLabs into treating the lack of 2.0 MacOS X
> support as "a bug", getting PythonLabs to backstitch the port
> into a 2.0 follow-on (*calling* it 2.0.x serves to pacify your IT
> paranoids).  No cost to you, and 10 extra dollars in your pocket.
>  Everyone wins <wink>.

There is a curious psychology involved. I've noticed that a 
significant number of people (roughly 30%) always download 
an older release.

Example: Last week I announced a new release (j) of Installer. 
70% of the downloads were for that release.

There is only one previous Python 2 version of Installer 
available, but of people downloading a Python 2 version, 17% 
chose the older (I always send people to the html page, and 
none of the referrers shows a direct link - so this was a 
concious decision).

Of people downloading a 1.5.2 release (15% of total), 69% 
chose the latest, and 31% chose an older. This is the stable 
pattern (the fact that 83% of Python 2 users chose the latest 
is skewed by the fact that this was the first week it was 
available).

Since I yank a release if it turns out to introduce bugs, these 
people are not downloading older because they've heard it 
"works better". The interface has hardly changed in the entire 
span of available releases, so these are not people avoiding 
learning something new.

These are people who are simply highly resistent to anything 
new, with no inclination to test their assumptions against 
reality.

As Guido said, Republicans :-). 


- Gordon



From thomas at xs4all.net  Mon Mar  5 01:16:55 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 5 Mar 2001 01:16:55 +0100
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103031910.OAA21663@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sat, Mar 03, 2001 at 02:10:35PM -0500
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <200103031910.OAA21663@cj20424-a.reston1.va.home.com>
Message-ID: <20010305011655.V9678@xs4all.nl>

On Sat, Mar 03, 2001 at 02:10:35PM -0500, Guido van Rossum wrote:

> But I understand that you're saying that the community has grown so
> conservative that it can't stand new features even if they *are* fully
> backwards compatible.

There is an added dimension, especially with Python. Bugs in the new
features. If it entails changes in the compiler or VM (like import-as, which
changed the meaning of FROM_IMPORT and added a IMPORT_STAR opcode) or if
modules get augmented to use the new features, these changes can introduce
bugs into existing code that doesn't even use the new features itself.

> I wonder, does that extend to new library modules?  Is there also
> resistance against the growth there?  I don't think so -- if anything,
> people are clamoring for more stuff to become standard (while at the
> same time I feel some pressure to cut dead wood, like the old SGI
> multimedia modules).

No (yes), bugfix releases should fix bugs, not add features (nor remove
them). Modules in the std lib are just features.

> So that relegates us at PythonLabs to a number of things: coding new
> modules (boring), or trying to improve performance of the virtual
> machine (equally boring, and difficult to boot), or fixing bugs (did I
> mention boring? :-).

How can you say this ? Okay, so *fixing* bugs isn't terribly exciting, but
hunting them down is one of the best sports around. Same for optimizations:
rewriting the code might be boring (though if you are a fast typist, it
usually doesn't take long enough to get boring :) but thinking them up is
the fun part. 

But who said PythonLabs had to do all the work ? You guys didn't do all the
work in 2.0->2.1, did you ? Okay, so most of the major features are written
by PythonLabs, and most of the decisions are made there, but there's no real
reason for it. Consider the Linux kernel: Linus Torvalds releases the
kernels in the devel 'tree' and usually the first few kernels in the
'stable' tree, and then Alan Cox takes over the stable tree and continues
it. (Note that this analogy isn't quite correct: the stable tree often
introduces new features, new drivers, etc, but avoids real incompatibilites
and usually doesn't require extra upgrades of tools and such.)

I hope you don't think any less of me if I volunteer *again* :-) but I'm
perfectly willing to maintain the bugfix release(s). I also don't think we
should necessarily stay at a single bugfix release. Whether or not a 'beta'
for the bugfix release is necessary, I'm not sure. I don't think so, at
least not if you release multiple bugfix releases. 

Holiday-Greetings-from-Long-Beach-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jeremy at alum.mit.edu  Sun Mar  4 00:32:32 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Sat, 3 Mar 2001 18:32:32 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <20010305011655.V9678@xs4all.nl>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
	<mailman.982897324.9109.python-list@python.org>
	<200103031821.NAA24060@panix3.panix.com>
	<200103031910.OAA21663@cj20424-a.reston1.va.home.com>
	<20010305011655.V9678@xs4all.nl>
Message-ID: <15009.32528.29406.232901@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

  [GvR:]
  >> So that relegates us at PythonLabs to a number of things: coding
  >> new modules (boring), or trying to improve performance of the
  >> virtual machine (equally boring, and difficult to boot), or
  >> fixing bugs (did I mention boring? :-).

  TW> How can you say this ? Okay, so *fixing* bugs isn't terribly
  TW> exciting, but hunting them down is one of the best sports
  TW> around. Same for optimizations: rewriting the code might be
  TW> boring (though if you are a fast typist, it usually doesn't take
  TW> long enough to get boring :) but thinking them up is the fun
  TW> part.

  TW> But who said PythonLabs had to do all the work ? You guys didn't
  TW> do all the work in 2.0->2.1, did you ? Okay, so most of the
  TW> major features are written by PythonLabs, and most of the
  TW> decisions are made there, but there's no real reason for
  TW> it.

Most of the work I did for Python 2.0 was fixing bugs.  It was a lot
of fairly tedious but necessary work.  I have always imagined that
this was work that most people wouldn't do unless they were paid to do
it.  (python-dev seems to have a fair number of exceptions, though.)

Working on major new features has a lot more flash, so I imagine that
volunteers would be more inclined to help.  Neil's work on GC or yours
on augmented assignment are examples.

There's nothing that says we have to do all the work.  In fact, I
imagine we'll continue to collectively spend a lot of time on
maintenance issues.  We get paid to do it, and we get to hack on Zope
and ZODB the rest of the time, which is also a lot of fun.

Jeremy



From jack at oratrix.nl  Mon Mar  5 11:47:17 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 05 Mar 2001 11:47:17 +0100
Subject: [Python-Dev] os module UserDict
Message-ID: <20010305104717.A5104373C95@snelboot.oratrix.nl>

Importing os has started failing on the Mac since the riscos mods are in 
there, it tries to use UserDict without having imported it first.

I think that the problem is that the whole _Environ stuff should be inside the 
else part of the try/except, but I'm not sure I fully understand what goes on. 
Could whoever did these mods have a look?

Also, it seems that the whole if name != "riscos" is a bit of a hack...
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++





From phil at river-bank.demon.co.uk  Mon Mar  5 17:15:13 2001
From: phil at river-bank.demon.co.uk (Phil Thompson)
Date: Mon, 05 Mar 2001 16:15:13 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
Message-ID: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>

Any chance of the attached small patch be applied to enable weak
references to functions?

It's particularly useful for lambda functions and closes the "very last
loophole where a programmer can cause a PyQt script to seg fault" :)

Phil
-------------- next part --------------
diff -ruN Python-2.1b1.orig/Include/funcobject.h Python-2.1b1/Include/funcobject.h
--- Python-2.1b1.orig/Include/funcobject.h	Thu Jan 25 20:06:58 2001
+++ Python-2.1b1/Include/funcobject.h	Mon Mar  5 13:00:58 2001
@@ -16,6 +16,7 @@
     PyObject *func_doc;
     PyObject *func_name;
     PyObject *func_dict;
+    PyObject *func_weakreflist;
 } PyFunctionObject;
 
 extern DL_IMPORT(PyTypeObject) PyFunction_Type;
diff -ruN Python-2.1b1.orig/Objects/funcobject.c Python-2.1b1/Objects/funcobject.c
--- Python-2.1b1.orig/Objects/funcobject.c	Thu Mar  1 06:06:37 2001
+++ Python-2.1b1/Objects/funcobject.c	Mon Mar  5 13:39:37 2001
@@ -245,6 +245,8 @@
 static void
 func_dealloc(PyFunctionObject *op)
 {
+	PyObject_ClearWeakRefs((PyObject *) op);
+
 	PyObject_GC_Fini(op);
 	Py_DECREF(op->func_code);
 	Py_DECREF(op->func_globals);
@@ -336,4 +338,7 @@
 	Py_TPFLAGS_DEFAULT | Py_TPFLAGS_GC, /*tp_flags*/
 	0,		/* tp_doc */
 	(traverseproc)func_traverse,	/* tp_traverse */
+	0,		/* tp_clear */
+	0,		/* tp_richcompare */
+	offsetof(PyFunctionObject, func_weakreflist)	/* tp_weaklistoffset */
 };

From thomas at xs4all.net  Tue Mar  6 00:28:50 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 6 Mar 2001 00:28:50 +0100
Subject: [Python-Dev] Patch for Weak References for Functions
In-Reply-To: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>; from phil@river-bank.demon.co.uk on Mon, Mar 05, 2001 at 04:15:13PM +0000
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
Message-ID: <20010306002850.B9678@xs4all.nl>

On Mon, Mar 05, 2001 at 04:15:13PM +0000, Phil Thompson wrote:

> Any chance of the attached small patch be applied to enable weak
> references to functions?

It's probably best to upload it to SourceForge, even though it seems pretty
broken right now. Especially during the Python conference, posts are
terribly likely to fall into oblivion.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From skip at mojam.com  Tue Mar  6 01:33:05 2001
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:33:05 -0600 (CST)
Subject: [Python-Dev] Who wants this GCC/Solaris bug report?
Message-ID: <15012.12353.311124.819970@beluga.mojam.com>

I was assigned the following bug report:

   http://sourceforge.net/tracker/?func=detail&aid=232787&group_id=5470&atid=105470

I made a pass through the code in question, made one change to posixmodule.c
that I thought appropriate (should squelch one warning) and some comments
about the other warnings.  I'm unable to actually test any changes since I
don't run Solaris, so I don't feel comfortable doing anything more.  Can
someone else take this one over?  In theory, my comments should help you
zero in on a fix faster (famous last words).

Skip




From skip at mojam.com  Tue Mar  6 01:41:50 2001
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:41:50 -0600 (CST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
References: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
Message-ID: <15012.12878.853762.563753@beluga.mojam.com>

    Tim> Note there was never a bugfix release for 1.5.2, despite that 1.5.2
    Tim> had some serious bugs, and that 1.5.2 was current for an
    Tim> unprecedentedly long time.  Guido put out a call for volunteers to
    Tim> produce a 1.5.2 bugfix release, but nobody responded.  Past is
    Tim> prelude ...

Yes, but 1.5.2 source was managed differently.  It was released while the
source was still "captive" to CNRI and the conversion to Sourceforge was
relatively speaking right before the 2.0 release and had the added
complication that it more-or-less coincided with the formation of
PythonLabs.  With the source tree where someone can easily branch it, I
think it's now feasible to create a bug fix branch and have someone
volunteer to manage additions to it (that is, be the filter that decides if
a code change is a bug fix or a new feature).

Skip



From skip at mojam.com  Tue Mar  6 01:48:33 2001
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:48:33 -0600 (CST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103032155.QAA05049@panix3.panix.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
	<mailman.982897324.9109.python-list@python.org>
	<200103031821.NAA24060@panix3.panix.com>
	<mailman.983646726.27322.python-list@python.org>
	<200103032155.QAA05049@panix3.panix.com>
Message-ID: <15012.13281.629270.275993@beluga.mojam.com>

    aahz> Yup.  The idea is that because it's always an N and N-1 pair, the
    aahz> base code is the same for both and applying patches to both should
    aahz> be (relatively speaking) a small amount of extra work.  Most of
    aahz> the work lies in deciding *which* patches should go into N-1.

The only significant problem I see is making sure submitted patches contain
just bug fixes or new features and not a mixture of the two.

    aahz> The main reason I suggested two betas was to "lockstep" the bugfix
    aahz> release to the next version's feature release.

I don't see any real reason to sync them.  There's no particular reason I
can think of why you couldn't have 2.1.1, 2.1.2 and 2.1.3 releases before
2.2.0 is released and not have any bugfix release coincident with 2.2.0.
Presumably, any bug fixes between the release of 2.1.3 and 2.2.0 would also
appear in the feature branch.  As long as there was someone willing to
manage a particular bug fix branch, such a branch could continue for a
relatively long ways, long past the next feature release.

Skip




From skip at mojam.com  Tue Mar  6 01:53:38 2001
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:53:38 -0600 (CST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <3AA21EFA.30660.4C134459@localhost>
References: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>
	<3AA21EFA.30660.4C134459@localhost>
Message-ID: <15012.13586.201583.620776@beluga.mojam.com>

    Gordon> There is a curious psychology involved. I've noticed that a
    Gordon> significant number of people (roughly 30%) always download an
    Gordon> older release.

    Gordon> Example: Last week I announced a new release (j) of Installer.
    Gordon> 70% of the downloads were for that release.

    ...

    Gordon> Of people downloading a 1.5.2 release (15% of total), 69% 
    Gordon> chose the latest, and 31% chose an older. This is the stable 
    Gordon> pattern (the fact that 83% of Python 2 users chose the latest 
    Gordon> is skewed by the fact that this was the first week it was 
    Gordon> available).

Check your web server's referral logs.  I suspect a non-trivial fraction of
those 30% were coming via offsite links such as search engine referrals and
weren't even aware a new installer was available.

Skip



From gmcm at hypernet.com  Tue Mar  6 03:09:38 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Mon, 5 Mar 2001 21:09:38 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <15012.13586.201583.620776@beluga.mojam.com>
References: <3AA21EFA.30660.4C134459@localhost>
Message-ID: <3AA40092.13561.536C8052@localhost>

>     Gordon> Of people downloading a 1.5.2 release (15% of total),
>     69% Gordon> chose the latest, and 31% chose an older. This is
>     the stable Gordon> pattern (the fact that 83% of Python 2
>     users chose the latest Gordon> is skewed by the fact that
>     this was the first week it was Gordon> available).
[Skip] 
> Check your web server's referral logs.  I suspect a non-trivial
> fraction of those 30% were coming via offsite links such as
> search engine referrals and weren't even aware a new installer
> was available.

That's the whole point - these stats are from the referrals. My 
download directory is not indexed or browsable. I only 
announce the page with the download links on it. And sure 
enough, all downloads come from there.

- Gordon



From fdrake at acm.org  Mon Mar  5 17:15:27 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Mon, 5 Mar 2001 11:15:27 -0500 (EST)
Subject: [Python-Dev] XML runtime errors?
In-Reply-To: <01f701c01d05$0aa98e20$766940d5@hagrid>
References: <009601c01cf1$467458e0$766940d5@hagrid>
	<200009122155.QAA01452@cj20424-a.reston1.va.home.com>
	<01f701c01d05$0aa98e20$766940d5@hagrid>
Message-ID: <15011.48031.772007.248246@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > how about adding:
 > 
 >     class XMLError(RuntimeError):
 >         pass

  Looks like someone already added Error for this.

 > > > what's wrong with "SyntaxError"?
 > > 
 > > That would be the wrong exception unless it's parsing Python source
 > > code.
 > 
 > gotta fix netrc.py then...

  And this still isn't done.  I've made changes in my working copy,
introducting a specific exception which carries useful information
(msg, filename, lineno), so that all syntax exceptions get this
information as well.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From martin at loewis.home.cs.tu-berlin.de  Tue Mar  6 08:22:58 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 6 Mar 2001 08:22:58 +0100
Subject: [Python-Dev] os module UserDict
Message-ID: <200103060722.f267Mwe01222@mira.informatik.hu-berlin.de>

> I think that the problem is that the whole _Environ stuff should be
> inside the else part of the try/except, but I'm not sure I fully
> understand what goes on.  Could whoever did these mods have a look?

I agree that this patch was broken; the _Environ stuff was in the else
part before. The change was committed by gvanrossum; the checkin
comment says that its author was dschwertberger. 

> Also, it seems that the whole if name != "riscos" is a bit of a
> hack...

I agree. What it seems to say is 'even though riscos does have a
putenv, we cannot/should not/must not wrap environ with a UserDict.'

I'd suggest to back-out this part of the patch, unless a consistent
story can be given RSN.

Regards,
Martin

P.S. os.py mentions an "import riscos". Where is that module?



From jack at oratrix.nl  Tue Mar  6 14:31:12 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 06 Mar 2001 14:31:12 +0100
Subject: [Python-Dev] __all__ in urllib
Message-ID: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>

The __all__ that was added to urllib recently causes me quite a lot of grief
(this is "me the application programmer", not "me the MacPython maintainer").
I have a module that extends urllib; what used to work was a simple
"from urllib import *" plus a few override functions, but with this
__all__ stuff that doesn't work anymore.

I started fixing up __all__, but then I realised that this is probably not the
right solution. "from xxx import *" can really be used for two completely
distinct cases. One is as a convenience, where the user doesn't want to prefix
all references with xxx. But the other distinct case is in a module that is an
extension of another module. In this second case you would really want to
bypass this whole __all__ mechanism.

I think that the latter is a valid use case for import *, and that there 
should be some way to get this behaviour.
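For illustration, one workaround that needs no new language machinery can be
sketched in a few lines: copy a module's public top-level names into a
namespace explicitly, ignoring __all__ entirely. The helper name import_all
below is hypothetical, not an existing facility.

```python
# Hypothetical helper: bind every public top-level name of `module`
# into `namespace`, bypassing the module's __all__ list entirely.
def import_all(module, namespace):
    for name in dir(module):
        if not name.startswith('_'):      # still skip private names
            namespace[name] = getattr(module, name)

# Example with the math module:
import math
ns = {}
import_all(math, ns)
print(ns['pi'])                           # the copied binding works normally
```

An extending module could call this with globals() as the namespace instead
of relying on "from urllib import *".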
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++





From skip at mojam.com  Tue Mar  6 14:51:49 2001
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 6 Mar 2001 07:51:49 -0600 (CST)
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
References: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
Message-ID: <15012.60277.150431.237935@beluga.mojam.com>

    Jack> I started fixing up __all__, but then I realised that this is
    Jack> probably not the right solution. 

    Jack> One is as a convenience, where the user doesn't want to prefix all
    Jack> references with xxx. but the other distinct case is in a module
    Jack> that is an extension of another module. In this second case you
    Jack> would really want to bypass this whole __all__ mechanism.

    Jack> I think that the latter is a valid use case for import *, and that
    Jack> there should be some way to get this behaviour.

Two things come to mind.  One, perhaps a more careful coding of urllib to
avoid exposing names it shouldn't export would be a better choice.  Two,
perhaps those symbols that are not documented but that would be useful when
extending urllib functionality should be documented and added to __all__.

Here are the non-module names I didn't include in urllib.__all__:

    MAXFTPCACHE
    localhost
    thishost
    ftperrors
    noheaders
    ftpwrapper
    addbase
    addclosehook
    addinfo
    addinfourl
    basejoin
    toBytes
    unwrap
    splittype
    splithost
    splituser
    splitpasswd
    splitport
    splitnport
    splitquery
    splittag
    splitattr
    splitvalue
    splitgophertype
    always_safe
    getproxies_environment
    getproxies
    getproxies_registry
    test1
    reporthook
    test
    main

None are documented, so there are no guarantees if you use them (I have
subclassed addinfourl in the past myself).

Skip



From sjoerd at oratrix.nl  Tue Mar  6 17:19:11 2001
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Tue, 06 Mar 2001 17:19:11 +0100
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: Your message of Fri, 02 Mar 2001 09:22:27 -0500.
             <200103021422.JAA06497@cj20424-a.reston1.va.home.com> 
References: <200103021422.JAA06497@cj20424-a.reston1.va.home.com> 
Message-ID: <20010306161912.54E9A301297@bireme.oratrix.nl>

At the meeting of W3C working groups last week in Cambridge, MA, I saw
that he used Python...

On Fri, Mar 2 2001 Guido van Rossum wrote:

> I was tickled when I found a quote from Tim Berners-Lee about Python
> here: http://www.w3.org/2000/10/swap/#L88
> 
> Most quotable part: "Python is a language you can get into on one
> battery!"
> 
> We should be able to use that for PR somewhere...
> 
> --Guido van Rossum (home page: http://www.python.org/~guido/)
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> 

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From dietmar at schwertberger.de  Tue Mar  6 23:54:30 2001
From: dietmar at schwertberger.de (Dietmar Schwertberger)
Date: Tue, 6 Mar 2001 23:54:30 +0100 (GMT)
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <200103060722.f267Mwe01222@mira.informatik.hu-berlin.de>
Message-ID: <Marcel-1.53-0306225430-0b02%2U@schwertberger.freenet.de>

Hi Martin,

thanks for CC'ing to me.

On Tue 06 Mar, Martin v. Loewis wrote:
> > I think that the problem is that the whole _Environ stuff should be
> > inside the else part of the try/except, but I'm not sure I fully
> > understand what goes on.  Could whoever did these mods have a look?
> 
> I agree that this patch was broken; the _Environ stuff was in the else
> part before. The change was committed by gvanrossum; the checkin
> comment says that its author was dschwertberger. 
Yes, it's from me. Unfortunately a whitespace problem with me, my editor
and my diffutils required Guido to apply most of the patches manually...


> > Also, it seems that the whole if name != "riscos" is a bit of a
> > hack...
> 
> I agree. What it seems to say is 'even though riscos does have a
> putenv, we cannot/should not/must not wrap environ with a UserDict.'
> 
> I'd suggest to back-out this part of the patch, unless a consistent
> story can be given RSN.
In plat-riscos there is a different UserDict-like implementation of
environ which is imported at the top of os.py in the 'riscos' part.
'name != "riscos"' just avoids overriding this. Maybe it would have
been better to include riscosenviron._Environ into os.py, as this would
look - and be - less hacky?
I must admit, I didn't care much when I started with riscosenviron.py
by just copying UserDict.py last year.

The RISC OS implementation doesn't store any data itself but just
emulates a dictionary with getenv() and putenv().
This suits how the environment is used under RISC OS: it holds a lot of
configuration data and may easily grow to some hundred KB, so it is
undesirable to import all the data at startup if it is not really
required. The environment is also sometimes used for communication
between tasks (the changes don't just affect subprocesses started later,
but all tasks), so read access to environ should return the current
value.
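A minimal sketch of that idea in today's Python (the class name LazyEnviron
is hypothetical): a mapping that stores nothing itself and defers each access
to getenv()/putenv(). Note that CPython's os.getenv actually consults the
os.environ snapshot taken at startup, so this only approximates the live-read
behaviour RISC OS wants.

```python
import os

class LazyEnviron:
    """Dictionary-like view that holds no data and defers to the environment."""
    def __getitem__(self, key):
        value = os.getenv(key)            # read on every access, no cache
        if value is None:
            raise KeyError(key)
        return value
    def __setitem__(self, key, value):
        os.putenv(key, value)             # write straight through, keep no copy
```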


And this is just _one_ of the points where RISC OS is different from
the rest of the world...


> Regards,
> Martin
> 
> P.S. os.py mentions an "import riscos". Where is that module?
riscosmodule.c lives in the RISCOS subdirectory together with all the
other RISC OS specific stuff needed for building the binaries.


Regards,

Dietmar

P.S.: How can I subscribe to python-dev (at least read-only)?
      I couldn't find a reference on python.org or Sourceforge.
P.P.S.: If you wonder what RISC OS is and why it is different:
        You may remember the 'Archimedes' from the british
        manufacturer Acorn. This was the first RISC OS computer...




From martin at loewis.home.cs.tu-berlin.de  Wed Mar  7 07:38:52 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 7 Mar 2001 07:38:52 +0100
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <Marcel-1.53-0306225430-0b02%2U@schwertberger.freenet.de>
	(message from Dietmar Schwertberger on Tue, 6 Mar 2001 23:54:30 +0100
	(GMT))
References: <Marcel-1.53-0306225430-0b02%2U@schwertberger.freenet.de>
Message-ID: <200103070638.f276cqj01518@mira.informatik.hu-berlin.de>

> Yes, it's from me. Unfortunately a whitespace problem with me, my editor
> and my diffutils required Guido to apply most of the patches manually...

I see. What do you think about the patch included below? It also gives
you the default argument to os.getenv, which riscosmodule does not
have.

> In plat-riscos there is a different UserDict-like implementation of
> environ which is imported at the top of os.py in the 'riscos' part.
> 'name != "riscos"' just avoids overriding this. Maybe it would have
> been better to include riscosenviron._Environ into os.py, as this would
> look - and be - less hacky?

No, I think it is good to have the platform-specific code in platform
modules, and only merge them appropriately in os.py.

> P.S.: How can I subscribe to python-dev (at least read-only)?

You can't; it is by invitation only. You can find the archives at

http://mail.python.org/pipermail/python-dev/

Regards,
Martin

Index: os.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/os.py,v
retrieving revision 1.46
diff -u -r1.46 os.py
--- os.py	2001/03/06 15:26:07	1.46
+++ os.py	2001/03/07 06:31:34
@@ -346,17 +346,19 @@
     raise exc, arg
 
 
-if name != "riscos":
-    # Change environ to automatically call putenv() if it exists
-    try:
-        # This will fail if there's no putenv
-        putenv
-    except NameError:
-        pass
-    else:
-        import UserDict
+# Change environ to automatically call putenv() if it exists
+try:
+    # This will fail if there's no putenv
+    putenv
+except NameError:
+    pass
+else:
+    import UserDict
 
-    if name in ('os2', 'nt', 'dos'):  # Where Env Var Names Must Be UPPERCASE
+    if name == "riscos":
+        # On RISC OS, all env access goes through getenv and putenv
+        from riscosenviron import _Environ
+    elif name in ('os2', 'nt', 'dos'):  # Where Env Var Names Must Be UPPERCASE
         # But we store them as upper case
         class _Environ(UserDict.UserDict):
             def __init__(self, environ):
Index: plat-riscos/riscosenviron.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/plat-riscos/riscosenviron.py,v
retrieving revision 1.1
diff -u -r1.1 riscosenviron.py
--- plat-riscos/riscosenviron.py	2001/03/02 05:55:07	1.1
+++ plat-riscos/riscosenviron.py	2001/03/07 06:31:34
@@ -3,7 +3,7 @@
 import riscos
 
 class _Environ:
-    def __init__(self):
+    def __init__(self, initial = None):
         pass
     def __repr__(self):
         return repr(riscos.getenvdict())



From dietmar at schwertberger.de  Wed Mar  7 09:44:54 2001
From: dietmar at schwertberger.de (Dietmar Schwertberger)
Date: Wed, 7 Mar 2001 09:44:54 +0100 (GMT)
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <200103070638.f276cqj01518@mira.informatik.hu-berlin.de>
Message-ID: <Marcel-1.53-0307084454-b492%2U@schwertberger.freenet.de>

On Wed 07 Mar, Martin v. Loewis wrote:
> > Yes, it's from me. Unfortunately a whitespace problem with me, my editor
> > and my diffutils required Guido to apply most of the patches manually...
> 
> I see. What do you think about the patch included below? It also gives
> you the default argument to os.getenv, which riscosmodule does not
> have.
Yes, looks good. Thanks.
Please don't forget to replace the 'from riscosenviron import...' statement
in the riscos section at the start of os.py with an empty 'environ', as
there is no environ in riscosmodule.c:
(The following patch also fixes a bug: 'del ce' should have been 'del riscos')

=========================================================================
*diff -c Python-200:$.Python-2/1b1.Lib.os/py SCSI::SCSI4.$.AcornC_C++.Python.!Python.Lib.os/py 
*** Python-200:$.Python-2/1b1.Lib.os/py Fri Mar  2 07:04:51 2001
--- SCSI::SCSI4.$.AcornC_C++.Python.!Python.Lib.os/py Wed Mar  7 08:31:33 2001
***************
*** 160,170 ****
      import riscospath
      path = riscospath
      del riscospath
!     from riscosenviron import environ
  
      import riscos
      __all__.extend(_get_exports_list(riscos))
!     del ce
  
  else:
      raise ImportError, 'no os specific module found'
--- 160,170 ----
      import riscospath
      path = riscospath
      del riscospath
!     environ = {}
  
      import riscos
      __all__.extend(_get_exports_list(riscos))
!     del riscos
  
  else:
      raise ImportError, 'no os specific module found'
========================================================================

If you change riscosenviron.py, would you mind replacing 'setenv' with
'putenv'? It seems '__setitem__' has never been tested...


Regards,

Dietmar




From martin at loewis.home.cs.tu-berlin.de  Wed Mar  7 10:11:46 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 7 Mar 2001 10:11:46 +0100
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <Marcel-1.53-0307084454-b492%2U@schwertberger.freenet.de>
	(message from Dietmar Schwertberger on Wed, 7 Mar 2001 09:44:54 +0100
	(GMT))
References: <Marcel-1.53-0307084454-b492%2U@schwertberger.freenet.de>
Message-ID: <200103070911.f279Bks02780@mira.informatik.hu-berlin.de>

> Please don't forget to replace the 'from riscosenviron import...' statement
> from the riscos section at the start of os.py with an empty 'environ' as
> there is no environ in riscosmodule.c:

There used to be one in riscosenviron, which you had imported. I've
deleted the entire import (trusting that environ will be initialized
later on); and removed the riscosenviron.environ, which now only has
the _Environ class.

> (The following patch also fixes a bug: 'del ce' instead of 'del riscos')

That change was already applied (probably Guido caught the error when
editing the change in).

> If you change riscosenviron.py, would you mind replacing 'setenv' with
> 'putenv'? It seems '__setitem__' has never been tested...

Done.

Martin



From greg at cosc.canterbury.ac.nz  Thu Mar  8 05:06:20 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 08 Mar 2001 17:06:20 +1300 (NZDT)
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
Message-ID: <200103080406.RAA04034@s454.cosc.canterbury.ac.nz>

Jack Jansen <jack at oratrix.nl>:

> but the other distinct case is in a module that is an 
> extension of another module. In this second case you would really want to 
> bypass this whole __all__ mechanism.
> 
> I think that the latter is a valid use case for import *, and that there 
> should be some way to get this behaviour.

How about:

  from foo import **

meaning "give me ALL the stuff in module foo, no, really,
I MEAN it" (possibly even including _ names).

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From thomas at xs4all.net  Fri Mar  9 00:20:57 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 9 Mar 2001 00:20:57 +0100
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/sandbox test.txt,1.1,NONE
In-Reply-To: <E14b2wY-0005VS-00@usw-pr-cvs1.sourceforge.net>; from jackjansen@users.sourceforge.net on Thu, Mar 08, 2001 at 08:07:10AM -0800
References: <E14b2wY-0005VS-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <20010309002057.H404@xs4all.nl>

On Thu, Mar 08, 2001 at 08:07:10AM -0800, Jack Jansen wrote:

> Testing SSH access from the Mac with MacCVS Pro. It seems to work:-)

Oh boy oh boy! Does that mean you'll merge the MacPython tree into the
normal CVS tree ? Don't forget to assign the proper rights to the PSF :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at acm.org  Thu Mar  8 09:28:43 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Thu, 8 Mar 2001 03:28:43 -0500 (EST)
Subject: [Python-Dev] Patch for Weak References for Functions
In-Reply-To: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
Message-ID: <15015.17083.582010.93308@localhost.localdomain>

Phil Thompson writes:
 > Any chance of the attached small patch be applied to enable weak
 > references to functions?
 > 
 > It's particularly useful for lambda functions and closes the "very last
 > loophole where a programmer can cause a PyQt script to seg fault" :)

Phil,
  Can you explain how this would help with the memory issues?  I'd
like to have a better idea of how this would make things work right.
Are there issues with the cyclic GC with respect to the Qt/KDE
bindings?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From phil at river-bank.demon.co.uk  Sat Mar 10 02:20:56 2001
From: phil at river-bank.demon.co.uk (Phil Thompson)
Date: Sat, 10 Mar 2001 01:20:56 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk> <15015.17083.582010.93308@localhost.localdomain>
Message-ID: <3AA98178.35B0257D@river-bank.demon.co.uk>

"Fred L. Drake, Jr." wrote:
> 
> Phil Thompson writes:
>  > Any chance of the attached small patch be applied to enable weak
>  > references to functions?
>  >
>  > It's particularly useful for lambda functions and closes the "very last
>  > loophole where a programmer can cause a PyQt script to seg fault" :)
> 
> Phil,
>   Can you explain how this would help with the memory issues?  I'd
> like to have a better idea of how this would make things work right.
> Are there issues with the cyclic GC with respect to the Qt/KDE
> bindings?

Ok, some background...

Qt implements a component model for its widgets. You build applications
by sub-classing the standard widgets and then "connect" them together.
Connections are made between signals and slots - both are defined as
class methods. Connections perform the same function as callbacks in
more traditional GUI toolkits like Xt. Signals/slots have the advantage
of being type safe and the resulting component model is very powerful -
it encourages class designers to build functionally rich component
interfaces.

PyQt supports this model. It also allows slots to be any Python callable
object - usually a class method. You create a connection between a
signal and slot using the "connect" method of the QObject class (from
which all objects that have signals or slots are derived). connect()
*does not* increment the reference count of a slot that is a Python
callable object. This is a design decision - earlier versions did do
this but it almost always results in circular reference counts. The
downside is that, if the slot object no longer exists when the signal is
emitted (because the programmer has forgotten to keep a reference to the
class instance alive) then the usual result is a seg fault. These days,
this is the only way a PyQt programmer can cause a seg fault with bad
code (famous last words!). This accounts for 95% of PyQt programmer's
problem reports.

With Python v2.1, connect() creates a weak reference to the Python
callable slot. When the signal is emitted, PyQt (actually it's SIP)
finds out that the callable has disappeared and takes care not to cause
the seg fault. The problem is that v2.1 only implements weak references
for class instance methods - not for all callables.

Most of the time callables other than instance methods are fairly fixed
- they are unlikely to disappear - not many scripts start deleting
function definitions. The exception, however, is lambda functions. It is
sometimes convenient to define a slot as a lambda function in order to
bind an extra parameter to the slot. Obviously lambda functions are much
more transient than regular functions - a PyQt programmer can easily
forget to make sure a reference to the lambda function stays alive. The
patch I proposed gives the PyQt programmer the same protection for
lambda functions as Python v2.1 gives them for class instance methods.
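For illustration, in later Pythons, where weakref does cover plain functions
and lambdas, the protection described here can be sketched as follows.
The connect function is a stand-in for the idea, not the Qt/PyQt API.

```python
import weakref

def connect(slot):
    # Hold only a weak reference to the callable slot, so the
    # connection itself never keeps the slot alive.
    return weakref.ref(slot)

slot = lambda x: x + 1
ref = connect(slot)
assert ref()(2) == 3        # target still alive: the call goes through

del slot                    # programmer forgot to keep a reference
if ref() is None:
    print("slot gone; skip the call instead of crashing")
```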

To be honest, I don't see why weak references have been implemented as a
bolt-on module that only supports one particular object type. The thing
I most like about the Python implementation is how consistent it is.
Weak references should be implemented for every object type - even for
None - you never know when it might come in useful.

As far as cyclic GC is concerned - I've ignored it completely, nobody
has made any complaints - so it either works without any problems, or
none of my user base is using it.

Phil



From skip at mojam.com  Sat Mar 10 02:49:04 2001
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 9 Mar 2001 19:49:04 -0600 (CST)
Subject: [Python-Dev] Patch for Weak References for Functions
In-Reply-To: <3AA98178.35B0257D@river-bank.demon.co.uk>
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
	<15015.17083.582010.93308@localhost.localdomain>
	<3AA98178.35B0257D@river-bank.demon.co.uk>
Message-ID: <15017.34832.44442.981293@beluga.mojam.com>

    Phil> This is a design decision - earlier versions did do this but it
    Phil> almost always results in circular reference counts. 

With cyclic GC couldn't you just let those circular reference counts occur
and rely on the GC machinery to break the cycles?  Or do you have __del__
methods? 

Skip



From paulp at ActiveState.com  Sat Mar 10 03:19:41 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Fri, 09 Mar 2001 18:19:41 -0800
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk> <15015.17083.582010.93308@localhost.localdomain> <3AA98178.35B0257D@river-bank.demon.co.uk>
Message-ID: <3AA98F3D.E01AD657@ActiveState.com>

Phil Thompson wrote:
> 
>...
> 
> To be honest, I don't see why weak references have been implemented as a
> bolt-on module that only supports one particular object type. The thing
> I most like about the Python implementation is how consistent it is.
> Weak references should be implemented for every object type - even for
> None - you never know when it might come in useful.

Weak references add a pointer to each object. This could add up for
(e.g.) integers. The idea is that you only pay the cost of weak
references for objects that you would actually create weak references
to.
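This trade-off is directly visible in the weakref module: types that reserve
a __weakref__ slot support weak references, while ints deliberately do not.
A short sketch:

```python
import weakref

class Widget:
    pass                      # ordinary classes get a __weakref__ slot

w = Widget()
r = weakref.ref(w)            # fine: instances pay for the extra pointer
assert r() is w

try:
    weakref.ref(1)            # ints opted out of that per-object cost
except TypeError:
    print("int does not support weak references")
```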

-- 
Python:
    Programming the way
    Guido
    indented it.



From phil at river-bank.demon.co.uk  Sat Mar 10 12:06:13 2001
From: phil at river-bank.demon.co.uk (Phil Thompson)
Date: Sat, 10 Mar 2001 11:06:13 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk> <15015.17083.582010.93308@localhost.localdomain> <3AA98178.35B0257D@river-bank.demon.co.uk> <3AA98F3D.E01AD657@ActiveState.com>
Message-ID: <3AAA0AA5.1E6983C2@river-bank.demon.co.uk>

Paul Prescod wrote:
> 
> Phil Thompson wrote:
> >
> >...
> >
> > To be honest, I don't see why weak references have been implemented as a
> > bolt-on module that only supports one particular object type. The thing
> > I most like about the Python implementation is how consistent it is.
> > Weak references should be implemented for every object type - even for
> > None - you never know when it might come in useful.
> 
> Weak references add a pointer to each object. This could add up for
> (e.g.) integers. The idea is that you only pay the cost of weak
> references for objects that you would actually create weak references
> to.

Yes, I know, and I'm suggesting that people will always find extra uses
for things which the original designers hadn't thought of. Better to be
consistent (and allow weak references to anything) than to try to
anticipate (wrongly) how people might want to use them in the future -
although I appreciate that the implementation cost might be too high.
Perhaps the question should be "what types make no sense with weak
references", and exclude them, rather than "what types might be able to
use weak references", and include them.

Having said that, my only immediate requirement is to allow weak
references to functions, and I'd be happy if only that was implemented.

Phil



From phil at river-bank.demon.co.uk  Sat Mar 10 12:06:07 2001
From: phil at river-bank.demon.co.uk (Phil Thompson)
Date: Sat, 10 Mar 2001 11:06:07 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
			<15015.17083.582010.93308@localhost.localdomain>
			<3AA98178.35B0257D@river-bank.demon.co.uk> <15017.34832.44442.981293@beluga.mojam.com>
Message-ID: <3AAA0A9F.FBDE0719@river-bank.demon.co.uk>

Skip Montanaro wrote:
> 
>     Phil> This is a design decision - earlier versions did do this but it
>     Phil> almost always results in circular reference counts.
> 
> With cyclic GC couldn't you just let those circular reference counts occur
> and rely on the GC machinery to break the cycles?  Or do you have __del__
> methods?

Remember I'm ignorant when it comes to cyclic GC - PyQt is older and I
didn't pay much attention to it when it was introduced, so I may be
missing a trick. One thing though: if you have a dialog displayed and
have a circular reference to it, and then you del() the dialog instance -
when will the GC actually get around to resolving the circular reference
and removing the dialog from the screen? It must be guaranteed to do so
before the Qt event loop is re-entered.

Every PyQt class has a __del__ method (because I need to control the
order in which instance "variables" are deleted).
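The __del__ point matters for Skip's suggestion: classic CPython 2.x refused
to collect cycles whose objects define __del__, parking them in gc.garbage
instead. A short sketch of the situation, noting that modern CPython
(3.4+, PEP 442) collects such cycles regardless:

```python
import gc

class Node:
    def __del__(self):
        pass                  # presence of __del__ was the old blocker

a, b = Node(), Node()
a.other, b.other = b, a       # build a two-object reference cycle
del a, b
gc.collect()                  # Python 3.4+ reclaims the cycle anyway
assert gc.garbage == []       # nothing is stranded anymore
```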

Phil



From guido at digicool.com  Sat Mar 10 21:08:25 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 10 Mar 2001 15:08:25 -0500
Subject: [Python-Dev] Looking for a (paid) reviewer of Python code
Message-ID: <200103102008.PAA05543@cj20424-a.reston1.va.home.com>

I received the mail below; apparently Mr. Amon's problem is that he
needs someone to review a Python program that he ordered written
before he pays the programmer.  Mr. Amon will pay for the review and
has given me permission to forward his message here.  Please write
him at <lramon at earthlink.net>.

--Guido van Rossum (home page: http://www.python.org/~guido/)

------- Forwarded Message

Date:    Wed, 07 Mar 2001 10:58:04 -0500
From:    "Larry Amon" <lramon at earthlink.net>
To:      <guido at python.org>
Subject: Python programs

Hi Guido,

    My name is Larry Amon and I am the President/CEO of SurveyGenie.com. We
have had a relationship with a programmer at Harvard who has been using
Python as his programming language of choice. He tells us that he has this
valuable program that he has developed in Python. Our problem is that we
don't know anyone who knows Python that would be able to verify his claim.
We have funded this guy with our own hard earned money and now he is holding
his program hostage. He is willing to make a deal, but we need to know if
his program is worth anything.

    Do you have any suggestions? You can reach me at lramon at earthlink.net or
you can call me at 941 593 8250.


Regards
Larry Amon
CEO SurveyGenie.com

------- End of Forwarded Message




From pedroni at inf.ethz.ch  Sun Mar 11 03:11:34 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Sun, 11 Mar 2001 03:11:34 +0100
Subject: [Python-Dev] nested scopes and global: some corner cases
Message-ID: <005c01c0a9d0$99ff21e0$ae5821c0@newmexico>

Hi.

Writing nested scopes support for Jython (it now passes test_scope and
test_future <wink>), I have come across these further corner cases for
nested scopes mixed with global declarations. I have tried them with
Python 2.1b1, and I wonder if the results are consistent with the
proposed rule: a free variable is bound according to the nearest outer
scope binding (assignment-like or global decl); class scopes (for
backward compatibility) are ignored in this respect.

(I)
from __future__ import nested_scopes

x='top'
def ta():
 global x
 def tata():
  exec "x=1" in locals()
  return x # LOAD_NAME
 return tata

print ta()() prints 1; I believed it should print 'top' and that a
LOAD_GLOBAL should have been produced. In this case the global binding
is somehow ignored. Note: putting a global decl in tata xor removing
the exec makes tata deliver 'top' as I expected (LOAD_GLOBALs are
emitted). Is this a bug, or am I missing something?

(II)
from __future__ import nested_scopes

x='top'
def ta():
    x='ta'
    class A:
        global x
        def tata(self):
            return x # LOAD_GLOBAL
    return A

print ta()().tata() # -> 'top'

Should not the global decl in class scope be ignored, so that x is
bound to the x in ta, resulting in 'ta' as output? If one substitutes
the global x with x='A', that's what happens. Or should only local
bindings in class scope be ignored, but not global decls?

regards, Samuele Pedroni




From tim.one at home.com  Sun Mar 11 06:16:38 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 11 Mar 2001 00:16:38 -0500
Subject: [Python-Dev] nested scopes and global: some corner cases
In-Reply-To: <005c01c0a9d0$99ff21e0$ae5821c0@newmexico>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>

[Samuele Pedroni]
> ...
> I have tried them with python 2.1b1 and I wonder if the results
> are consistent with the proposed rule:
> a free variable is bound according to the nearest outer scope binding
> (assign-like or global decl),
> class scopes (for backw-comp) are ignored wrt this.

"exec" and "import*" always complicate everything, though.

> (I)
> from __future__ import nested_scopes
>
> x='top'
> def ta():
>  global x
>  def tata():
>   exec "x=1" in locals()
>   return x # LOAD_NAME
>  return tata
>
> print ta()() prints 1, I believed it should print 'top' and a
> LOAD_GLOBAL should have been produced.

I doubt this will change.  In the presence of exec, the compiler has no idea
what's local anymore, so it deliberately generates LOAD_NAME.  When Guido
says he intends to "deprecate" exec-without-in, he should also always say
"and also deprecate exec in locals()/global() too".  But he'll have to think
about that and get back to you <wink>.

Note that modifications to locals() already have undefined behavior
(according to the Ref Man), so exec-in-locals() is undefined too if the
exec'ed code tries to (re)bind any names.
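That undefined behaviour is easy to demonstrate; in modern CPython the
rebinding attempted through exec is simply invisible to the enclosing
function:

```python
def demo():
    x = 'original'
    # Attempt to rebind the local through exec'd code.
    exec("x = 'rebound'", globals(), locals())
    return x                  # CPython still returns the compiled-in local

print(demo())
```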

> In this case the global binding is somehow ignored. Note: putting
> a global decl in tata xor removing the exec make tata deliver 'top' as
> I expected (LOAD_GLOBALs are emitted).
> Is this a bug or I'm missing something?

It's an accident either way (IMO), so it's a bug either way too -- or a
feature either way.  It's basically senseless!  What you're missing is the
layers of hackery in support of exec even before 2.1; this "give up on static
identification of locals entirely in the presence of exec" goes back many
years.

> (II)
> from __future__ import nested_scopes

> x='top'
> def ta():
>     x='ta'
>     class A:
>         global x
>         def tata(self):
>             return x # LOAD_GLOBAL
>     return A
>
> print ta()().tata() # -> 'top'
>
> should not the global decl in class scope be ignored and so x be
> bound to x in ta, resulting in 'ta' as output?

Yes, this one is clearly a bug.  Good catch!




From moshez at zadka.site.co.il  Sun Mar 11 16:19:44 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sun, 11 Mar 2001 17:19:44 +0200 (IST)
Subject: [Python-Dev] Numeric PEPs
Message-ID: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>

Trying once again for the sought-after position of "most PEPs on the
planet", here are 3 new PEPs as discussed on the DevDay. These PEPs
largely take apart the existing PEP-0228, which served its strawman
(or pie-in-the-sky) purpose well.

Note that according to PEP 0001, the discussion now should be focused
on whether these should be official PEPs, not on whether they are to
be accepted. If we decide that these PEPs are good enough to be PEPs,
Barry should check them in and fix the internal references between them.
I would also appreciate setting up a non-Yahoo list (either SF or python.org)
to discuss these issues -- I'd rather the discussion happened there than
in my mailbox -- I had a bad experience regarding that with PEP-0228.

(See Barry? "send a draft" isn't that scary. Bet you don't like me to
tell other people about it, huh?)

PEP: XXX
Title: Unifying Long Integers and Integers
Version: $Revision$
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Python-Version: 2.2
Type: Standards Track
Created: 11-Mar-2001
Post-History:


Abstract

    Python has both integers, machine word size integral types, and long
    integers, unbounded integral types. When integer operations overflow
    the machine word, they raise an error. This PEP proposes to do away
    with the distinction, and unify the types from the perspective of both
    the Python interpreter and the C API.

Rationale

    Having the machine word size leak to the language hinders portability
    (for example, .pyc's are not portable because of it). Many programs
    find a need to deal with larger numbers after the fact, and changing
    the algorithms later is not only bothersome, but hinders performance
    in the normal case.
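For reference, this is the behaviour Python eventually adopted; a quick illustration in today's Python, where int is unbounded:

```python
# The machine word size no longer leaks into the language: operations
# that would overflow a C long simply promote to a bignum silently.
n = 2**31 - 1                   # largest 32-bit signed value
assert n + 1 == 2**31           # no OverflowError, no 'L' suffix
assert 2**100 == 1267650600228229401496703205376
assert isinstance(2**100, int)  # one unified integer type
```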

Literals

    A trailing 'L' at the end of an integer literal will stop having any
    meaning, and will be eventually phased out. This will be done using
    warnings when encountering such literals. The warning will be off by
    default in Python 2.2, on by default for two revisions, and then will
    no longer be supported.

Builtin Functions

    The function long will call the function int, issuing a warning. The
    warning will be off in 2.2, and on for two revisions before the function
    is removed. A FAQ entry will be added explaining that if there are old
    modules needing this, then

         long=int

    At the top would solve this, or

         import __builtin__
         __builtin__.long=int

    In site.py.

C API

    All PyLong_AsX will call PyInt_AsX. If PyInt_AsX does not exist, it will
    be added. Similarly for PyLong_FromX. A similar path of warnings to that
    of the Python builtins will be followed.


Overflows

    When an arithmetic operation on two numbers whose internal representation
    is a machine-level integer returns something whose internal representation
    is a bignum, a warning, which is turned off by default, will be issued.
    This is only a debugging aid, and has no guaranteed semantics.

Implementation

    The PyInt type's slot for a C long will be turned into a 

           union {
               long i;
               digit digits[1];
           };

    Only the n-1 lower bits of the long have any meaning; the top bit is
    always set. This distinguishes the union. All PyInt functions will check
    this bit before deciding which types of operations to use.

Jython Issues

    Jython will have a PyInt interface which is implemented by both
    PyFixNum and PyBigNum.


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

==========================================
PEP: XXX
Title: Non-integer Division
Version: $Revision$
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Python-Version: 2.2
Type: Standards Track
Created: 11-Mar-2001
Post-History:


Abstract

    Dividing integers returns the floor of the quotient. This behaviour
    is known as integer division, and is similar to what C and FORTRAN do.
    This has the useful property that all operations on integers return
    integers, but it does tend to put a hump in the learning curve when
    new programmers are surprised that

                  1/2 == 0

    This proposal shows a way to change this while keeping backward
    compatibility issues in mind.

Rationale

    The behaviour of integer division is a major stumbling block found in
    user testing of Python. It manages to trip up new programmers
    regularly and even causes the experienced programmer to make the
    occasional bug. The workarounds, like explicitly coercing one of the
    operands to float or using a non-integer literal, are very non-intuitive
    and lower the readability of the program.

// Operator

    A '//' operator will be introduced, which will call the nb_intdivide
    or __intdiv__ slots. This operator will be implemented in all the Python
    numeric types, and will have the semantics of

                 a // b == floor(a/b)

    Except that the type of a//b will be the type that a and b are coerced
    into (specifically, if a and b are of the same type, a//b will be of that
    type too).
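These are the semantics the `//` operator ended up with; a sketch in today's Python:

```python
import math

assert 7 // 2 == 3 and 7 // 2 == math.floor(7 / 2)
assert -7 // 2 == -4                  # floor, not truncation toward zero
assert 7.0 // 2 == 3.0                # the result follows the coerced type
assert isinstance(7 // 2, int) and isinstance(7.0 // 2, float)
```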

Changing the Semantics of the / Operator

    The nb_divide slot on integers (and long integers, if these are a seperate
    type) will issue a warning when given integers a and b such that

                  a % b != 0

    The warning will be off by default in the 2.2 release, and on by default
    in the next Python release, and will stay in effect for 24 months.
    The first Python release after those 24 months will implement

                  (a/b) * b = a (more or less)

    The type of a/b will be either a float or a rational, depending on other
    PEPs.

__future__

    A special opcode, FUTURE_DIV will be added that does the equivalent
    of

        if type(a) in (types.IntType, types.LongType):
            if type(b) in (types.IntType, types.LongType):
                if a % b != 0:
                    return float(a)/b
        return a/b

    (or rational(a)/b, depending on whether 0.5 is rational or float)

    If "from __future__ import non_integer_division" is present in the
    releases until the IntType nb_divide is changed, the "/" operator is
    compiled to FUTURE_DIV
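This migration path is essentially what PEP 238 later implemented: Python 2 spelled the switch `from __future__ import division`, and in today's Python true division is the default:

```python
# "/" is now true division; "//" keeps the old floor-division meaning.
assert 1 / 2 == 0.5
assert isinstance(4 / 2, float)   # float even when the result is integral
assert 1 // 2 == 0
```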

Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

====================================
PEP: XXX
Title: Adding a Rational Type to Python
Version: $Revision$
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Python-Version: 2.2
Type: Standards Track
Created: 11-Mar-2001
Post-History:


Abstract

    Python has no number type whose semantics are those of an unboundedly
    precise rational number. This proposal explains the semantics of such
    a type, and suggests builtin functions and literals to support such
    a type. In addition, if division of integers would return a non-integer,
    it could also return a rational type.

Rationale

    While sometimes slower and more memory intensive (in general, unboundedly
    so), rational arithmetic captures more closely the mathematical ideal of
    numbers, and tends to have behaviour which is less surprising to newbies.
RationalType

    This will be a numeric type. The unary operators will do the obvious thing.
    Binary operators will coerce integers and long integers to rationals, and
    rationals to floats and complexes.

    The following attributes will be supported: .numerator, .denominator.
    The language definition will not define anything other than that

           r.denominator * r == r.numerator

    In particular, no guarantees are made regarding the GCD or the sign of
    the denominator, even though in the proposed implementation, the GCD is
    always 1 and the denominator is always positive.

    The method r.trim(max_denominator) will return the closest rational s to
    r such that abs(s.denominator) <= max_denominator.

The rational() Builtin

    This function will have the signature rational(n, d=1). n and d must both
    be integers, long integers or rationals. A guarantee is made that

            rational(n, d) * d == n
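The semantics described here closely match the `fractions.Fraction` type that later entered the standard library; a sketch using it as an analogue (it is not the proposed `rational()` builtin, and `limit_denominator()` plays the role of the proposed `trim()`):

```python
from fractions import Fraction

r = Fraction(1, 3)
assert r.numerator == 1 and r.denominator == 3
assert r.denominator * r == r.numerator      # the guaranteed identity
assert Fraction(6, 4) == Fraction(3, 2)      # GCD is 1 in the implementation
# trim(max_denominator): the closest rational with a bounded denominator.
assert Fraction(3, 10).limit_denominator(3) == Fraction(1, 3)
```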

Literals

    Literals conforming to the RE '\d*\.\d*' will be rational numbers.

Backwards Compatibility

    The only backwards compatibility issue is the type of the literals
    mentioned above. The following migration is suggested:

    1. from __future__ import rational_literals will cause all such literals
       to be treated as rational numbers.
    2. Python 2.2 will have a warning, turned off by default, about such
       literals in the absence of such a __future__ statement. The warning
       message will contain information about the __future__ statement, and
       note that to get floating point literals, they should be suffixed
       with "e0".
    3. Python 2.3 will have the warning turned on by default. This warning will
       stay in place for 24 months, at which time the literals will be rationals
       and the warning will be removed.


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From pedroni at inf.ethz.ch  Sun Mar 11 17:17:38 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Sun, 11 Mar 2001 17:17:38 +0100
Subject: [Python-Dev] nested scopes and global: some corner cases
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>
Message-ID: <001b01c0aa46$d3dbbd80$f979fea9@newmexico>

Hi.

[Tim Peters on
from __future__ import nested_scopes

x='top'
def ta():
  global x
  def tata():
   exec "x=1" in locals()
   return x # LOAD_NAME vs LOAD_GLOBAL?
  return tata

 print ta()() # 1 vs. 'top' ?
]
-- snip --
> It's an accident either way (IMO), so it's a bug either way too -- or a
> feature either way.  It's basically senseless!  What you're missing is the
> layers of hackery in support of exec even before 2.1; this "give up on static
> identification of locals entirely in the presence of exec" goes back many
> years.
(Just a joke) I'm not such a "newbie" that the guess that I'm missing
something is right with probability > .5. At least I hope so.
The same hackery is there in jython codebase
and I have taken much care in preserving it <wink>.

The point is simply that 'exec in locals()' is like a bare exec, but it has
been decided to allow 'exec in' even in the presence of nested scopes, and
we cannot detect the 'locals()' special case (at compile time) because in
python 'locals' is only the builtin with high probability.

So we face the problem of how to *implement* an undefined behaviour
(the ref says that changing locals is undefined: everybody knows)
that historically has never been to seg fault, in the new (nested scopes)
context. It is also true that what we are doing is "impossible"; that's why
it has been decided to raise a SyntaxError in the bare exec case <wink>.

To be honest, I have just implemented things in jython my/some way, and then
discovered that the jython CVS version and python 2.1b1 (here) behave
differently. A posteriori I just tried to solve/explain things using
the old problem pattern: I give you a (number) sequence, guess the next
term:

the sequence is: (over this jython and python agree)

from __future__ import nested_scopes

def a():
 exec "x=1" in locals()
 return x # LOAD_NAME (jython does the equivalent)

def b():
  global x
  exec "x=1" in locals()
  return x # LOAD_GLOBAL

def c():
 global x
 def cc(): return x # LOAD_GLOBAL
 return cc

def d():
 x='d'
 def dd():
   exec "x=1" in locals() # without 'in locals()' => SynError
   return x # LOAD_DEREF (x in d)
 return dd

and then the term to guess:

def z():
 global x
 def zz():
  exec "x=1" in locals() # without 'in locals()' => SynError
  return x # ???? python guesses LOAD_NAME, jython the equiv of LOAD_GLOBAL
 return zz

Should python and jython agree here too? Anybody wants to spend some time
convincing me that I should change jython meaning of undefined?
I will not spend more time to do the converse <wink>.
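For what it's worth, in a modern Python the compiler's choice can be inspected directly with the dis module; a sketch of case (c) above (modern syntax, so no string exec involved):

```python
import dis

x = 'top'

def c():
    global x
    def cc():
        return x   # expected to compile as LOAD_GLOBAL, as in case (c)
    return cc

# Collect the opcode names that cc() was compiled to.
ops = [instr.opname for instr in dis.get_instructions(c())]
assert 'LOAD_GLOBAL' in ops and 'LOAD_DEREF' not in ops
```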

regards, Samuele Pedroni.

PS: It is also possible that, trying to solve the pdb+nested scopes problem,
we will have to consider the grab-the-locals problem with more care.




From paulp at ActiveState.com  Sun Mar 11 20:15:11 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 11 Mar 2001 11:15:11 -0800
Subject: [Python-Dev] mail.python.org down?
Message-ID: <3AABCEBF.1FEC1F9D@ActiveState.com>

>>> urllib.urlopen("http://mail.python.org")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "c:\python20\lib\urllib.py", line 61, in urlopen
    return _urlopener.open(url)
  File "c:\python20\lib\urllib.py", line 166, in open
    return getattr(self, name)(url)
  File "c:\python20\lib\urllib.py", line 273, in open_http
    h.putrequest('GET', selector)
  File "c:\python20\lib\httplib.py", line 425, in putrequest
    self.send(str)
  File "c:\python20\lib\httplib.py", line 367, in send
    self.connect()
  File "c:\python20\lib\httplib.py", line 351, in connect
    self.sock.connect((self.host, self.port))
  File "<string>", line 1, in connect
IOError: [Errno socket error] (10061, 'Connection refused')

-- 
Python:
    Programming the way
    Guido
    indented it.



From tim.one at home.com  Sun Mar 11 20:14:28 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 11 Mar 2001 14:14:28 -0500
Subject: [Python-Dev] Forbidden names & obmalloc.c
Message-ID: <LNBBLJKPBEHFEDALKOLCOEOCJEAA.tim.one@home.com>

In std C, all identifiers that begin with an underscore and are followed by
an underscore or uppercase letter are reserved for the platform C
implementation.  obmalloc.c violates this rule all over the place, spilling
over into objimpl.h's use of _PyCore_ObjectMalloc, _PyCore_ObjectRealloc, and
_PyCore_ObjectFree.  The leading "_Py" there *probably* leaves them safe
despite being forbidden, but things like obmalloc.c's _SYSTEM_MALLOC and
_SET_HOOKS are going to bite us sooner or later (hard to say, but they may
have already, in bug #407680).

I renamed a few of the offending vrbl names, but I don't understand the
intent of the multiple layers of macros in this subsystem.  If anyone else
believes they do, please rename these suckers before the bad names get out
into the world and we have to break user code to repair eventual conflicts
with platforms' uses of these (reserved!) names.




From guido at digicool.com  Sun Mar 11 22:37:14 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 16:37:14 -0500
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: Your message of "Sun, 11 Mar 2001 00:16:38 EST."
             <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com> 
Message-ID: <200103112137.QAA13084@cj20424-a.reston1.va.home.com>

> When Guido
> says he intends to "deprecate" exec-without-in, he should also always say
> "and also deprecate exec in locals()/global() too".  But he'll have to think
> about that and get back to you <wink>.

Actually, I intend to deprecate locals().  For now, globals() are
fine.  I also intend to deprecate vars(), at least in the form that is
equivalent to locals().

> Note that modifications to locals() already have undefined behavior
> (according to the Ref Man), so exec-in-locals() is undefined too if the
> exec'ed code tries to (re)bind any names.

And that's the basis for deprecating it.
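The undefined behaviour in question is easy to exhibit; a minimal sketch of what CPython currently does (being undefined, none of this is guaranteed):

```python
def f():
    x = 1
    locals()['x'] = 2   # writes into a snapshot dict; CPython ignores it
    return x

assert f() == 1         # the attempted rebinding never reached the frame
```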

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Sun Mar 11 23:28:29 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 17:28:29 -0500
Subject: [Python-Dev] mail.python.org down?
In-Reply-To: Your message of "Sun, 11 Mar 2001 11:15:11 PST."
             <3AABCEBF.1FEC1F9D@ActiveState.com> 
References: <3AABCEBF.1FEC1F9D@ActiveState.com> 
Message-ID: <200103112228.RAA13919@cj20424-a.reston1.va.home.com>

> >>> urllib.urlopen("http://mail.python.org")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "c:\python20\lib\urllib.py", line 61, in urlopen
>     return _urlopener.open(url)
>   File "c:\python20\lib\urllib.py", line 166, in open
>     return getattr(self, name)(url)
>   File "c:\python20\lib\urllib.py", line 273, in open_http
>     h.putrequest('GET', selector)
>   File "c:\python20\lib\httplib.py", line 425, in putrequest
>     self.send(str)
>   File "c:\python20\lib\httplib.py", line 367, in send
>     self.connect()
>   File "c:\python20\lib\httplib.py", line 351, in connect
>     self.sock.connect((self.host, self.port))
>   File "<string>", line 1, in connect
> IOError: [Errno socket error] (10061, 'Connection refused')

Beats me.  Indeed it is down.  I've notified the folks at DC
responsible for the site.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From paulp at ActiveState.com  Mon Mar 12 00:15:38 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 11 Mar 2001 15:15:38 -0800
Subject: [Python-Dev] mail.python.org down?
References: <3AABCEBF.1FEC1F9D@ActiveState.com> <200103112228.RAA13919@cj20424-a.reston1.va.home.com>
Message-ID: <3AAC071A.799A8B50@ActiveState.com>

Guido van Rossum wrote:
> 
>...
> 
> Beats me.  Indeed it is down.  I've notified the folks at DC
> responsible for the site.

It is fixed now. Thanks!

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From paulp at ActiveState.com  Mon Mar 12 00:23:07 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 11 Mar 2001 15:23:07 -0800
Subject: [Python-Dev] Revive the types sig?
Message-ID: <3AAC08DB.9D4E96B4@ActiveState.com>

I have been involved with the types-sig for a long time and it has
consumed countless hours out of the lives of many brilliant people. I
strongly believe that it will only ever work if we change some of our
fundamental assumptions, goals and procedures. At next year's
conference, I do not want to be at the same place in the discussion that
we were this year, and last year, and the year before. The last time I
thought I could make progress through sheer effort. All that did was
burn me out and stress out my wife. We've got to work smarter, not
harder.

The first thing we need to adjust is our terminology and goals. I think
that we should design a *parameter type annotation* system that will
lead directly to better error checking *at runtime*, better
documentation, better development environments and so forth. Checking
types *at compile time* should be considered a tools issue that can be
solved by separate tools. I'm not going to say that Python will NEVER
have a static type checking system but I would say that that shouldn't
be a primary goal.

I've reversed my opinion on this issue. Hey, even Guido makes mistakes.

I think that if the types-sig is going to come up with something
useful this time, we must observe a few principles that have proven
useful in developing Python:

1. Incremental development is okay. You do not need the end-goal in
mind before you begin work. Python today is very different than it was
when it was first developed (not as radically different as some
languages, but still different).

2. It is not necessary to get everything right. Python has some warts.
Some are easier to remove than others but they can all be removed
eventually. We have to get a type system done, test it out, and then
maybe we have to remove the warts. We may not design a perfect gem from
the start. Perfection is a goal, not a requirement.

3. Whatever feature you believe is absolutely necessary to a decent
type system probably is not. There are no right or wrong answers,
only features that work better or worse than other features.

It is important to understand that a dynamically-checked type
annotation system is just a replacement for assertions. Anything that
cannot be expressed in the type system CAN be expressed through
assertions.

For instance one person might claim that the type system needs to
differentiate between 32 bit integers and 64 bit integers. But if we
do not allow that differentiation directly in the type system, they
could do that in assertions. C'est la vie.
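A minimal sketch of the assertion escape hatch being described (the function and the 32-bit constraint are illustrative, not a proposed API):

```python
def store_int32(value):
    # The annotation system would only say "int"; the width constraint
    # stays an ordinary runtime assertion.
    assert isinstance(value, int), "expected an integer"
    assert -2**31 <= value < 2**31, "expected a 32-bit signed integer"
    return value

assert store_int32(100) == 100
try:
    store_int32(2**40)
except AssertionError:
    pass
else:
    raise RuntimeError("the width assertion should have fired")
```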

This is not unique to Python.  Languages like C++ and Java also have
type test and type assertion operators to "work around" the
limitations of their type systems. If people who have spent their
entire lives inventing static type checking systems cannot come up
with systems that are 100% "complete" then we in the Python world
should not even try. There is nothing wrong with using assertions for
advanced type checks. 

For instance, if you try to come up with a type system that can define
the type of "map" you will probably come up with something so
complicated that it will never be agreed upon or implemented.
(Python's map is much harder to type-declare than that of functional
languages because the function passed in must handle exactly as many
arguments as the unbounded number of sequences that are passed as
arguments to map.)
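The arity coupling that makes map so hard to type-declare is easy to see at runtime:

```python
# One sequence argument -> the callable must take one argument:
assert list(map(abs, [-1, 2, -3])) == [1, 2, 3]
# Two sequence arguments -> the callable must take two:
assert list(map(lambda a, b: a + b, [1, 2], [10, 20])) == [11, 22]
# A mismatch is only caught when the result is consumed:
try:
    list(map(abs, [1], [2]))
except TypeError:
    pass
else:
    raise RuntimeError("expected a TypeError for the arity mismatch")
```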

Even if we took an extreme position and ONLY allowed type annotations
for basic types like strings, numbers and sequences, Python would 
still be a better language. There are thousands of instances of these 
types in the standard library. If we can improve the error checking 
and documentation of these methods we have improved on the status 
quo. Adding type annotations for the other parameters could wait 
for another day.

----

In particular there are three features that have always exploded into
unending debates in the past. I claim that they should temporarily be
set aside while we work out the basics.

 A) Parameterized types (or templates): 

Parameterized types always cause the discussion to spin out of control
as we discuss levels and types of
parameterizability. A type system can be very useful with
parameterization. For instance, Python itself is written in C. C has no
parameterizability. Yet C is obviously still very useful (and simple!).
Java also does not yet have parameterized types and yet it is the most
rapidly growing statically typed programming language!

It is also important to note that parameterized types are much, much
more important in a language that "claims" to catch most or all type
errors at compile time. Python will probably never make that claim.
If you want to do a more sophisticated type check than Python allows,
you should do that in an assertion:

assert Btree.containsType(String)

Once the basic type system is in place, we can discuss the importance
of parameterized types separately later. Once we have attempted to use
Python without them, we will understand our needs better. The type
system should not prohibit the addition of parameterized types in the
future. 

A person could make a strong argument for allowing parameterization
only of basic types ("list of string", "tuple of integers") but I
think that we could even postpone this for the future.

 B) Static type checking: 

Static type warnings are important and we want to enable the development
of tools that will detect type errors before applications are shipped.
Nevertheless, we should not attempt to define a static type checking
system for Python at this point. That may happen in the future or never.

Unlike Java or C++, we should not require the Python interpreter
itself to ever reject code that "might be" type incorrect. Other tools
such as linters and IDEs should handle these forms of whole-program
type-checks.  Rather than defining the behavior of these tools in
advance, we should leave that as a quality of implementation issue for
now.

We might decide to add a formally-defined static type checking to
Python in the future. Dynamically checked annotations give us a
starting point. Once again, I think that the type system should be
defined so that annotations could be used as part of a static type
checking system in the future, should we decide that we want one.

 C) Attribute-value and variable declarations: 

In traditional static type checking systems, it is very important to
declare the type for attributes in a class and variables in a function. 

This feature is useful but it is fairly separable. I believe it should
wait because it brings up a bunch of issues such as read-only
attributes, cross-boundary assignment checks and so forth.

I propose that the first go-round of the types-sig should ONLY address
the issue of function signatures.

Let's discuss my proposal in the types-sig. Executive summary:

 * incremental development policy
 * syntax for parameter type declarations
 * syntax for return type declarations
 * optional runtime type checking
 * goals are better runtime error reporting and method documentation

Deferred for future versions (or never):

 * compile-time type checking
 * parameterized types
 * declarations for variables and attributes
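In hindsight this is roughly the shape Python later took with function annotations (PEP 3107); a sketch of optional runtime checking layered on annotated signatures (the `typechecked` decorator is illustrative, not an existing or proposed stdlib API):

```python
import functools
import inspect

def typechecked(func):
    """Check annotated parameters at call time; unannotated ones pass."""
    hints = func.__annotations__
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if expected is not None and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}, "
                                f"got {type(value).__name__}")
        return func(*args, **kwargs)
    return wrapper

@typechecked
def repeat(text: str, times: int):
    return text * times

assert repeat("ab", 2) == "abab"
try:
    repeat("ab", "2")
except TypeError:
    pass
else:
    raise RuntimeError("expected a TypeError from the runtime check")
```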

http://www.python.org/sigs/types-sig/

-- 
Python:
    Programming the way
    Guido
    indented it.



From guido at digicool.com  Mon Mar 12 00:25:13 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 18:25:13 -0500
Subject: [Python-Dev] Unifying Long Integers and Integers
In-Reply-To: Your message of "Sun, 11 Mar 2001 17:19:44 +0200."
             <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
Message-ID: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>

(I'm splitting this in separate replies per PEP, to focus the
discussion a bit.)

> Trying once again for the sought after position of "most PEPs on the
> planet", here are 3 new PEPs as discussed on the DevDay. These PEPs
> are in a large way, taking apart the existing PEP-0228, which served
> its strawman (or pie-in-the-sky) purpose well.
> 
> Note that according to PEP 0001, the discussion now should be focused
> on whether these should be official PEPs, not whether these are to
> be accepted. If we decide that these PEPs are good enough to be PEPs,
> Barry should check them in and fix the internal references between them.

Actually, since you have SF checkin permissions, Barry can just give
you a PEP number and you can check it in yourself!

> I would also appreciate setting a non-Yahoo list (either SF or
> python.org) to discuss those issues -- I'd rather discussion will be
> there rather then in my mailbox -- I had bad experience regarding
> that with PEP-0228.

Please help yourself.  I recommend using SF since it requires less
overhead for the poor python.org sysadmins.

> (See Barry? "send a draft" isn't that scary. Bet you don't like me
> to tell other people about it, huh?)

What was that about?

> PEP: XXX
> Title: Unifying Long Integers and Integers
> Version: $Revision$
> Author: pep at zadka.site.co.il (Moshe Zadka)
> Status: Draft
> Python-Version: 2.2
> Type: Standards Track
> Created: 11-Mar-2001
> Post-History:
> 
> 
> Abstract
> 
>     Python has both integers, machine word size integral types, and
>     long integers, unbounded integral types. When integer
>     operations overflow the machine word, they raise an
>     error. This PEP proposes to do away with the distinction, and unify
>     the types from the perspective of both the Python interpreter
>     and the C API.
> 
> Rationale
> 
>     Having the machine word size leak to the language hinders
>     portability (for example, .pyc's are not portable because of
>     that). Many programs find a need to deal with larger numbers
>     after the fact, and changing the algorithms later is not only
>     bothersome, but hinders performance on the normal case.

I'm not sure if the portability of .pyc's is much worse than that of
.py files.  As long as you don't use plain ints >= 2**31 both are 100%
portable.  *programs* can of course become non-portable, but the true
reason for the change is simply that the distinction is arbitrary and
irrelevant.

> Literals
> 
>     A trailing 'L' at the end of an integer literal will stop having
>     any meaning, and will be eventually phased out. This will be
>     done using warnings when encountering such literals. The warning
>     will be off by default in Python 2.2, on by default for two
>     revisions, and then will no longer be supported.

Please suggest a more explicit schedule for introduction, with
approximate dates.  You can assume there will be roughly one 2.x
release every 6 months.

> Builtin Functions
> 
>     The function long will call the function int, issuing a
>     warning. The warning will be off in 2.2, and on for two
>     revisions before removing the function. A FAQ will be added that
>     if there are old modules needing this then
> 
>          long=int
> 
>     At the top would solve this, or
> 
>          import __builtin__
>          __builtin__.long=int
> 
>     In site.py.

There's more to it than that.  What about sys.maxint?  What should it
be set to?  We've got to pick *some* value because there's old code
that uses it.  (An additional problem here is that it's not easy to
issue warnings for using a particular constant.)
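For the record, the resolution that eventually shipped: Python 3 removed `sys.maxint` entirely, and `sys.maxsize` (the container-index limit) took over its remaining legitimate uses:

```python
import sys

assert not hasattr(sys, 'maxint')      # gone in Python 3
assert sys.maxsize + 1 > sys.maxsize   # ints just keep growing past it
```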

Other areas where we need to decide what to do: there are a few
operations that treat plain ints as unsigned: hex() and oct(), and the
format operators "%u", "%o" and "%x".  These have different semantics
for bignums!  (There they ignore the request for unsignedness and
return a signed representation anyway.)
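This was ultimately settled by adopting the signed (bignum) semantics everywhere; in today's Python:

```python
# hex()/oct() and %x/%o return signed representations for every int:
assert hex(-1) == '-0x1'
assert '%x' % -1 == '-1'
assert oct(-8) == '-0o10'
assert '%o' % 8 == '10'
```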

There may be more -- the PEP should strive to eventually list all
issues, although of course it needn't be complete at first checkin.

> C API
> 
>     All PyLong_AsX will call PyInt_AsX. If PyInt_AsX does not exist,
>     it will be added. Similarly for PyLong_FromX. A similar path of
>     warnings to that of the Python builtins will be followed.

Many C APIs for other datatypes currently take int or long arguments,
e.g. list indexing and slicing.  I suppose these could stay the same,
or should we provide ways to use longer integers from C as well?

Also, what will you do about PyInt_AS_LONG()?  If PyInt_Check()
returns true for bignums, C code that uses PyInt_Check() and then
assumes that PyInt_AS_LONG() will return a valid outcome is in for a
big surprise!  I'm afraid that we will need to think through the
compatibility strategy for C code more.

> Overflows
> 
>     When an arithmetic operation on two numbers whose internal
>     representation is a machine-level integer returns something
>     whose internal representation is a bignum, a warning, which is
>     turned off by default, will be issued. This is only a debugging
>     aid, and has no guaranteed semantics.

Note that the implementation suggested below implies that the overflow
boundary is at a different value than currently -- you take one bit
away from the long.  For backwards compatibility I think that may be
bad...

> Implementation
> 
>     The PyInt type's slot for a C long will be turned into a 
> 
>            union {
>                long i;
>                digit digits[1];
>            };

Almost.  The current bignum implementation actually has a length field
first.

I have an alternative implementation in mind where the type field is
actually different for machine ints and bignums.  Then the existing
int representation can stay, and we lose no bits.  This may have other
implications though, since uses of type(x) == type(1) will be broken.
Once the type/class unification is complete, this could be solved by
making long a subtype of int.

>     Only the n-1 lower bits of the long have any meaning, the top
>     bit is always set. This distinguishes the union. All PyInt
>     functions will check this bit before deciding which types of
>     operations to use.

See above. :-(

> Jython Issues
> 
>     Jython will have a PyInt interface which is implemented by
>     both PyFixNum and PyBigNum.
> 
> 
> Copyright
> 
>     This document has been placed in the public domain.
> 
> 
> 
> Local Variables:
> mode: indented-text
> indent-tabs-mode: nil
> End:

All in all, a good start, but needs some work, Moshe!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 12 00:37:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 18:37:37 -0500
Subject: [Python-Dev] Non-integer Division
In-Reply-To: Your message of "Sun, 11 Mar 2001 17:19:44 +0200."
             <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
Message-ID: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>

Good start, Moshe!  Some comments below.

> PEP: XXX
> Title: Non-integer Division
> Version: $Revision$
> Author: pep at zadka.site.co.il (Moshe Zadka)
> Status: Draft
> Python-Version: 2.2
> Type: Standards Track
> Created: 11-Mar-2001
> Post-History:
> 
> 
> Abstract
> 
>     Dividing integers returns the floor of the quotient. This
>     behaviour is known as integer division, and is similar to what C
>     and FORTRAN do.  This has the useful property that all
>     operations on integers return integers, but it does tend to put
>     a hump in the learning curve when new programmers are surprised
>     that
> 
>                   1/2 == 0
> 
>     This proposal shows a way to change this while keeping backward
>     compatibility issues in mind.
> 
> Rationale
> 
>     The behaviour of integer division is a major stumbling block
>     found in user testing of Python. It manages to trip up new
>     programmers regularly and even causes the experienced
>     programmer to make the occasional bug. The workarounds, like
>     explicitly coercing one of the operands to float or using a
>     non-integer literal, are very non-intuitive and lower the
>     readability of the program.

There is a specific kind of example that shows why this is bad.
Python's polymorphism and treatment of mixed-mode arithmetic
(e.g. int+float => float) suggests that functions taking float
arguments and doing some math on them should also be callable with int
arguments.  But sometimes that doesn't work.  For example, in
electronics, Ohm's law suggests that current (I) equals voltage (U)
divided by resistance (R).  So here's a function to calculate the
current:

    >>> def I(U, R):
    ...     return U/R
    ...
    >>> print I(110, 100) # Current through a 100 Ohm resistor at 110 Volt
    1
    >>> 

This answer is wrong! It should be 1.1.  While there's a work-around
(return 1.0*U/R), it's ugly, and moreover because no exception is
raised, simple code testing may not reveal the bug.  I've seen this
reported many times.

> // Operator

Note: we could wind up using a different way to spell this operator,
e.g. Pascal uses 'div'.  The disadvantage of 'div' is that it
introduces a new reserved word, with all the issues it creates.  The
disadvantage of '//' is that it means something very different to Java
and C++ users.

>     A '//' operator will be introduced, which will call the
>     nb_intdivide or __intdiv__ slots. This operator will be
>     implemented in all the Python numeric types, and will have the
>     semantics of
> 
>                  a // b == floor(a/b)
> 
>     Except that the type of a//b will be the type a and b will be
>     coerced into (specifically, if a and b are of the same type,
>     a//b will be of that type too).
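The quoted semantics can be sketched in Python itself (intdiv is a
stand-in name of mine; the proposal is for an operator, not a function):

```python
import math

def intdiv(a, b):
    # Sketch of the proposed a // b: the floor of the true quotient,
    # returned in the type a and b coerce to.
    q = math.floor(a / b)
    if isinstance(a, float) or isinstance(b, float):
        return float(q)
    return q

# intdiv(7, 2) == 3; intdiv(-7, 2) == -4 (floor, not truncation
# toward zero); intdiv(7.0, 2) == 3.0 (result type follows coercion).
```

Note the -7 case: floor division rounds toward negative infinity, which
is what distinguishes it from C's truncating division.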
> 
> Changing the Semantics of the / Operator
> 
>     The nb_divide slot on integers (and long integers, if these are
>     a separate type) will issue a warning when given integers a and
>     b such that
> 
>                   a % b != 0
> 
>     The warning will be off by default in the 2.2 release, on by
>     default in the next Python release, and will stay in effect
>     for 24 months.  The first Python release after those 24 months
>     will implement
> 
>                   (a/b) * b = a (more or less)
> 
>     The type of a/b will be either a float or a rational, depending
>     on other PEPs.
> 
> __future__
> 
>     A special opcode, FUTURE_DIV will be added that does the equivalent

Maybe for compatibility of bytecode files we should come up with a
better name, e.g. FLOAT_DIV?

>     of
> 
>         if type(a) in (types.IntType, types.LongType):
>              if type(b) in (types.IntType, types.LongType):
>                  if a % b != 0:
>                       return float(a)/b
>         return a/b
> 
>     (or rational(a)/b, depending on whether 0.5 is rational or float)
> 
>     If "from __future__ import non_integer_division" is present in the
>     releases until the IntType nb_divide is changed, the "/" operator is
>     compiled to FUTURE_DIV
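In present-day spelling, the opcode's effect amounts to this (future_div
is a made-up name for illustration):

```python
def future_div(a, b):
    # What the proposed FUTURE_DIV opcode would compute: int/int with
    # a remainder yields a float; everything else divides as before.
    if isinstance(a, int) and isinstance(b, int):
        if a % b != 0:
            return float(a) / b
        return a // b  # exact quotient: stays an integer, as today
    return a / b
```

So future_div(1, 2) gives 0.5 while future_div(4, 2) still gives the
integer 2, which is exactly the compatibility line the PEP draws.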

I find "non_integer_division" rather long.  Maybe it should be called
"float_division"?

> Copyright
> 
>     This document has been placed in the public domain.
> 
> 
> 
> Local Variables:
> mode: indented-text
> indent-tabs-mode: nil
> End:

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 12 00:55:03 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 18:55:03 -0500
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Your message of "Sun, 11 Mar 2001 17:19:44 +0200."
             <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
Message-ID: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>

Here's the third installment -- my response to Moshe's rational
numbers PEP.

I believe that a fourth PEP should be written as well: decimal
floating point.  Maybe Tim can draft this?

> PEP: XXX
> Title: Adding a Rational Type to Python
> Version: $Revision$
> Author: pep at zadka.site.co.il (Moshe Zadka)
> Status: Draft
> Python-Version: 2.2
> Type: Standards Track
> Created: 11-Mar-2001
> Post-History:
> 
> 
> Abstract
> 
>     Python has no number type whose semantics are that of an
>     unboundedly precise rational number.

But one could easily be added to the standard library, and several
implementations exist, including one in the standard distribution:
Demo/classes/Rat.py.

>     This proposal explains the
>     semantics of such a type, and suggests builtin functions and
>     literals to support such a type. In addition, if division of
>     integers would return a non-integer, it could also return a
>     rational type.

It's kind of sneaky not to mention in the abstract that this should be
the default representation for numbers containing a decimal point,
replacing most use of floats!

> Rationale
> 
>     While sometimes slower and more memory intensive (in general,
>     unboundedly so) rational arithmetic captures more closely the
>     mathematical ideal of numbers, and tends to have behaviour which
>     is less surprising to newbies,

This PEP definitely needs a section of arguments Pro and Con.  For
Con, mention at least that rational arithmetic is much slower than
floating point, and can become *very* much slower when algorithms
aren't coded carefully.  Now, naively coded algorithms often don't
work well with floats either, but there is a lot of cultural knowledge
about defensive programming with floats, which is easily accessible to
newbies -- similar information about coding with rationals is much
less easily accessible, because no mainstream languages have used
rationals before.  (I suppose Common Lisp has rationals, since it has
everything, but I doubt that it uses them by default for numbers with
a decimal point.)

> RationalType
> 
>     This will be a numeric type. The unary operators will do the
>     obvious thing.  Binary operators will coerce integers and long
>     integers to rationals, and rationals to floats and complexes.
>
>     The following attributes will be supported: .numerator,
>     .denominator.  The language definition will define nothing
>     other than that
> 
>            r.denominator * r == r.numerator
> 
>     In particular, no guarantees are made regarding the GCD or the
>     sign of the denominator, even though in the proposed
>     implementation, the GCD is always 1 and the denominator is
>     always positive.
>
>     The method r.trim(max_denominator) will return the closest
>     rational s to r such that abs(s.denominator) <= max_denominator.
> 
> The rational() Builtin
> 
>     This function will have the signature rational(n, d=1). n and d
>     must both be integers, long integers or rationals. A guarantee
>     is made that
> 
>             rational(n, d) * d == n
> 
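For the record, both guarantees can be exercised against the
fractions.Fraction type available in current Pythons, used here purely
as a stand-in for the proposed rational():

```python
from fractions import Fraction
import math

r = Fraction(110, 100)                 # the PEP's rational(n, d)
assert r * 100 == 110                  # guarantee: rational(n, d) * d == n
assert (r.numerator, r.denominator) == (11, 10)   # kept in lowest terms

# The PEP's r.trim(max_denominator) exists there as limit_denominator():
assert Fraction(math.pi).limit_denominator(10) == Fraction(22, 7)
```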
> Literals
> 
>     Literals conforming to the RE '\d*\.\d*' will be rational numbers.
> 
> Backwards Compatibility
> 
>     The only backwards compatibility issue is the type of literals
>     mentioned above. The following migration is suggested:
> 
>     1. from __future__ import rational_literals will cause all such
>        literals to be treated as rational numbers.
>     2. Python 2.2 will have a warning, turned off by default, about
>        such literals in the absence of such a __future__ statement. The
>        warning message will contain information about the __future__
>        statement, and that to get floating point literals, they
>        should be suffixed with "e0".
>     3. Python 2.3 will have the warning turned on by default. This
>        warning will stay in place for 24 months, at which time the
>        literals will be rationals and the warning will be removed.

There are also backwards compatibility issues at the C level.

Question: the time module's time() function currently returns a
float.  Should it return a rational instead?  This is a trick question.

> Copyright
> 
>     This document has been placed in the public domain.
> 
> 
> 
> Local Variables:
> mode: indented-text
> indent-tabs-mode: nil
> End:

--Guido van Rossum (home page: http://www.python.org/~guido/)



From moshez at zadka.site.co.il  Mon Mar 12 01:25:23 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 02:25:23 +0200 (IST)
Subject: [Python-Dev] Re: Unifying Long Integers and Integers
In-Reply-To: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>
References: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <20010312002523.94F68AA3A@darjeeling.zadka.site.co.il>

On Sun, 11 Mar 2001, Guido van Rossum <guido at digicool.com> wrote:

> Actually, since you have SF checkin permissions, Barry can just give
> you a PEP number and you can check it in yourself!

Technically yes. I'd rather Barry changed PEP-0000 himself ---
if he's ready to do that and let me check in the PEPs, that's fine, but
I just imagined he'd like to keep the state consistent.

[re: numerical PEPs mailing list] 
> Please help yourself.  I recommend using SF since it requires less
> overhead for the poor python.org sysadmins.

Err...I can't. Requesting an SF mailing list is an admin operation.

[re: portablity of literals]
> I'm not sure if the portability of .pyc's is much worse than that of
> .py files.

Of course, .py's and .pyc's are just as portable. I do think that this
helps programs with literals inside them be more portable,
especially since (I believe) the world will soon be a mixture of
32-bit and 64-bit machines.

> There's more to it than that.  What about sys.maxint?  What should it
> be set to?

I think I'd like to stuff this one into "open issues" and ask people to
grep through code searching for sys.maxint before I decide.

Grepping through the standard library shows that this is most often
used as a maximum size for sequences. So, I think it should probably be
the maximum size of an integer type large enough to hold a pointer.
(The only exception is mhlib.py, which uses it when int(string) gives an
OverflowError -- which it would stop doing, so that code would become
unreachable.)

> Other areas where we need to decide what to do: there are a few
> operations that treat plain ints as unsigned: hex() and oct(), and the
> format operators "%u", "%o" and "%x".  These have different semantics
> for bignums!  (There they ignore the request for unsignedness and
> return a signed representation anyway.)

This would probably be solved by the fact that after the change 1<<31
will be positive. The real problem is that << stops having 32 bit semantics --
but it never really had those anyway, it had machine-long-size semantics,
which were unportable, so we can just ask people with unportable code
to fix it.

What do you think? Should I issue a warning on shifts whose result
would have been truncated or sign-flipped under the old semantics?
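To make the question concrete, here is a sketch (old_shift32 is a
made-up helper; 32 bits chosen for the example):

```python
def old_shift32(a, n):
    # Emulate the old machine-long semantics on a 32-bit box:
    # keep the low 32 bits and reinterpret them as signed.
    v = (a << n) & 0xFFFFFFFF
    if v >= 1 << 31:
        v -= 1 << 32
    return v

# The proposed warning would fire exactly when the results differ:
assert old_shift32(1, 31) == -2147483648   # old: wraps negative
assert 1 << 31 == 2147483648               # unified: just 2**31
assert old_shift32(1, 4) == 1 << 4         # small shifts: unchanged
```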

> May C APIs for other datatypes currently take int or long arguments,
> e.g. list indexing and slicing.  I suppose these could stay the same,
> or should we provide ways to use longer integers from C as well?

Hmmmm....I'd probably add PyInt_AS_LONG_LONG under an #ifdef HAVE_LONG_LONG

> Also, what will you do about PyInt_AS_LONG()?  If PyInt_Check()
> returns true for bignums, C code that uses PyInt_Check() and then
> assumes that PyInt_AS_LONG() will return a valid outcome is in for a
> big surprise!

Yes, that's a problem. I have no immediate solution to that -- I'll
add it to the list of open issues.

> Note that the implementation suggested below implies that the overflow
> boundary is at a different value than currently -- you take one bit
> away from the long.  For backwards compatibility I think that may be
> bad...

It also means overflow raises a different exception. Again, I suspect
it will be used only in cases where the algorithm is supposed to maintain
that internal results are not bigger than the inputs, or things like that,
and there only as a debugging aid -- so I don't think that this would be
that bad. And if people want to avoid using the longs for performance reasons,
then the implementation should definitely *not* lie to them.

> Almost.  The current bignum implementation actually has a length field
> first.

My bad. ;-)

> I have an alternative implementation in mind where the type field is
> actually different for machine ints and bignums.  Then the existing
> int representation can stay, and we lose no bits.  This may have other
> implications though, since uses of type(x) == type(1) will be broken.
> Once the type/class unification is complete, this could be solved by
> making long a subtype of int.

OK, so what's the concrete advice? How about if I just said "integer operations
that previously raised OverflowError now return long integers, and literals
in programs that are too big to be integers are long integers"? I started
leaning this way when I started writing the PEP and decided that true
unification may not be the low-hanging fruit we always assumed it would be.

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Mon Mar 12 01:36:58 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 02:36:58 +0200 (IST)
Subject: [Python-Dev] Re: Non-integer Division
In-Reply-To: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>
References: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <20010312003658.01096AA27@darjeeling.zadka.site.co.il>

On Sun, 11 Mar 2001, Guido van Rossum <guido at digicool.com> wrote:

> > // Operator
> 
> Note: we could wind up using a different way to spell this operator,
> e.g. Pascal uses 'div'.  The disadvantage of 'div' is that it
> introduces a new reserved word, with all the issues it creates.  The
> disadvantage of '//' is that it means something very different to Java
> and C++ users.

I have zero (0) intuition about what is better. You choose --- I have
no opinions on this. If we do go the "div" route, I need to also think
up a syntactic migration path once I figure out the parsing issues
involved. This isn't an argument -- just something you might want to 
consider before pronouncing on "div".

> Maybe for compatibility of bytecode files we should come up with a
> better name, e.g. FLOAT_DIV?

Hmmmm.....bytecode files have so far failed to be compatible across
any revision. I have no problem with that; I just feel that if
we're serious about compatibility, we should say so, and if we're not,
then half-assed measures will not help.

[re: from __future__ import non_integer_division] 
> I find "non_integer_division" rather long.  Maybe it should be called
> "float_division"?

I have no problems with that -- except that if the rational PEP is accepted,
then this would be rational_integer_division, and I didn't want to commit
myself yet.

You haven't commented yet about the rational PEP, so I don't know if that's
even an option.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Mon Mar 12 02:00:25 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 03:00:25 +0200 (IST)
Subject: [Python-Dev] Re: Adding a Rational Type to Python
In-Reply-To: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>
References: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <20010312010025.C93A7AA27@darjeeling.zadka.site.co.il>

On Sun, 11 Mar 2001, Guido van Rossum <guido at digicool.com> wrote:

> I believe that a fourth PEP should be written as well: decimal
> floating point.  Maybe Tim can draft this?

Better. I have very little decimal floating point experience, and in
any case I'd find it hard to write a PEP I don't believe in. However, I
would rather that it be written, if only to be officially rejected, so if
no one volunteers to write it, I'm willing to do it anyway.
(Besides, I might manage to actually overtake Jeremy in number of PEPs
if I do this)

> It's kind of sneaky not to mention in the abstract that this should be
> the default representation for numbers containing a decimal point,
> replacing most use of floats!

I beg the mercy of the court. This was here, but got lost in the editing.
I've put it back.

> This PEP definitely needs a section of arguments Pro and Con.  For
> Con, mention at least that rational arithmetic is much slower than
> floating point, and can become *very* much slower when algorithms
> aren't coded carefully.

Note that I did try to help with coding carefully by adding the ".trim"
method.

> There are also backwards compatibility issues at the C level.

Hmmmmm....what are those? Very few C functions explicitly expect a
float, and the responsibility here can be pushed off to the Python
programmer by requiring explicit floats. For the others, PyArg_ParseTuple
can just coerce to float with the "d" type.

> Question: the time module's time() function currently returns a
> float.  Should it return a rational instead?  This is a trick question.

It should return the most exact number the underlying operating system
supports. For example, in OSes supporting gettimeofday, return a rational
built from tv_sec and tv_usec.
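A sketch, with fractions.Fraction standing in for the proposed rational
type (rational_time is a hypothetical name, not a real API):

```python
from fractions import Fraction

def rational_time(tv_sec, tv_usec):
    # gettimeofday() reports seconds plus microseconds; the exact
    # timestamp is sec + usec/10**6, with no float rounding at all.
    return tv_sec + Fraction(tv_usec, 10 ** 6)

# rational_time(10, 500000) is exactly 21/2, i.e. 10.5 seconds.
```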
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From jeremy at alum.mit.edu  Mon Mar 12 02:22:04 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Sun, 11 Mar 2001 20:22:04 -0500 (EST)
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <3AAC08DB.9D4E96B4@ActiveState.com>
References: <3AAC08DB.9D4E96B4@ActiveState.com>
Message-ID: <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "PP" == Paul Prescod <paulp at ActiveState.com> writes:

  PP> Let's discuss my proposal in the types-sig. Executive summary:

  PP> * incremental development policy
  PP> * syntax for parameter type declarations
  PP> * syntax for return type declarations
  PP> * optional runtime type checking
  PP> * goals are better runtime error reporting and method
  PP>    documentation

If your goal is really the last one, then I don't see why we need the
first four <0.9 wink>.  Let's take this to the doc-sig.

I have never felt that Python's runtime error reporting is all that
bad.  Can you provide some more motivation for this concern?  Do you
have any examples of obscure errors that will be made clearer via type
declarations?

The best example I can think of for bad runtime error reporting is a
function that expects a sequence (perhaps of strings) and is passed a
string.  Since a string is a sequence, the argument is treated as a
sequence of length-1 strings.  I'm not sure how type declarations
help, because:

    (1) You would usually want to say that the function accepts a
        sequence -- and that doesn't get you any farther.

    (2) You would often want to say that the type of the elements of
        the sequence doesn't matter -- like len -- or that the type of
        the elements matters but the function is polymorphic -- like
        min.  In either case, you seem to be ruling out types for
        these very common sorts of functions.
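To make that failure mode concrete (longest is an invented example, not
from the thread):

```python
def longest(strings):
    # Meant to take a sequence of strings.  Pass a single string and
    # nothing fails: a string is itself a sequence of length-1 strings.
    return max(strings, key=len)

assert longest(["spam", "eggs!"]) == "eggs!"
assert longest("spam") == "s"   # no exception -- just a wrong answer
```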

If documentation is really the problem you want to solve, I imagine
we'd make much more progress if we could agree on a javadoc-style
format for documentation.  The ability to add return-type declarations
to functions and methods doesn't seem like much of a win.

Jeremy



From pedroni at inf.ethz.ch  Mon Mar 12 02:34:52 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Mon, 12 Mar 2001 02:34:52 +0100
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>  <200103112137.QAA13084@cj20424-a.reston1.va.home.com>
Message-ID: <003f01c0aa94$a3be18c0$325821c0@newmexico>

Hi.

[GvR]
> Actually, I intend to deprecate locals().  For now, globals() are
> fine.  I also intend to deprecate vars(), at least in the form that is
> equivalent to locals().
>
That's fine with me. Will that deprecation already be active in 2.1, e.g.
having locals() and param-less vars() raise a warning?
I imagine a (new) function that produces a snapshot of the values in the
local, free and cell vars of a scope could do the job required for simple
debugging (the copy will not allow modifying the values back), or another
approach...

regards, Samuele Pedroni




From pedroni at inf.ethz.ch  Mon Mar 12 02:39:51 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Mon, 12 Mar 2001 02:39:51 +0100
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>  <200103112137.QAA13084@cj20424-a.reston1.va.home.com>
Message-ID: <001c01c0aa95$55836f60$325821c0@newmexico>

Hi.

[GvR]
> Actually, I intend to deprecate locals().  For now, globals() are
> fine.  I also intend to deprecate vars(), at least in the form that is
> equivalent to locals().
>
That's fine with me. Will that deprecation already be active in 2.1, e.g.
having locals() and param-less vars() raise a warning?
I imagine a (new) function that produces a snapshot of the values in the
local, free and cell vars of a scope could do the job required for simple
debugging (the copy will not allow modifying the values back),
or another approach...

In the meantime (if there's a meantime), is it OK for Jython to behave
the way I have explained with respect to exec+locals()+global+nested
scopes, or not?

regards, Samuele Pedroni




From michel at digicool.com  Mon Mar 12 03:05:48 2001
From: michel at digicool.com (Michel Pelletier)
Date: Sun, 11 Mar 2001 18:05:48 -0800 (PST)
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <3AAC08DB.9D4E96B4@ActiveState.com>
Message-ID: <Pine.LNX.4.32.0103111745440.887-100000@localhost.localdomain>

On Sun, 11 Mar 2001, Paul Prescod wrote:

> Let's discuss my proposal in the types-sig. Executive summary:
>
>  * incremental development policy
>  * syntax for parameter type declarations
>  * syntax for return type declarations
>  * optional runtime type checking
>  * goals are better runtime error reporting and method documentation

I could be way over my head here, but I'll try to give you my ideas.

I've read the past proposals for type declarations and their
syntax, and I've also read a good bit of the types-sig archive.

I feel that there is not as much benefit to extending type declarations
into the language as there is to interfaces.  I feel this way because I'm
not sure what benefit this has over an object that describes the types you
are expecting and is associated with your object (like an interface).

The upshot of having an interface describe your expected parameter and
return types is that the type checking can be made as compile/run-time,
optional/mandatory as you want without changing the language or your
implementation at all.  "Strong" checking could be done during testing,
and no checking at all during production, and any level in between.

A disadvantage of an interface is that it is a separate, additional step
over just writing code (as are any type assertions in the language, but
those are "easier" inline with the implementation).  But this
disadvantage is also an upshot when you imagine that the interface could
be developed later and bolted onto the implementation without
changing it.

Also, type checking in general is good, but what about preconditions (this
parameter must be an int > 5 and < 10), postconditions, and other conditions
one now checks with assertions?  Would these be more language extensions in
your proposal?

As I see it, interfaces satisfy your first point, remove the need for your
second and third points, satisfy your fourth point, and meet the goals of
your fifth.

Nice to meet you at the conference,

-Michel





From greg at cosc.canterbury.ac.nz  Mon Mar 12 04:10:19 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 12 Mar 2001 16:10:19 +1300 (NZDT)
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: <003f01c0aa94$a3be18c0$325821c0@newmexico>
Message-ID: <200103120310.QAA04837@s454.cosc.canterbury.ac.nz>

Samuele Pedroni <pedroni at inf.ethz.ch>:

> I imagine a (new) function that produce a snap-shot of the values in
> the local,free and cell vars of a scope can do the job required for
> simple debugging (the copy will not allow to modify back the values)

Modifying the values doesn't cause any problem, only
adding new names to the scope. So locals() or whatever
replaces it could return a mapping object that doesn't 
allow adding any keys.
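A sketch of such a mapping object (FrozenNamespace is an invented name;
a real replacement would wrap the frame's actual scope chain):

```python
class FrozenNamespace:
    """Mapping over a scope's names: existing values may be read (and
    even rebound), but no new keys can ever be added."""

    def __init__(self, mapping):
        self._data = dict(mapping)

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        if key not in self._data:
            raise KeyError("cannot add new name %r to scope" % (key,))
        self._data[key] = value

# Works wherever locals() is used as a format mapping, since the
# %-operator only needs __getitem__:
ns = FrozenNamespace({"i": 1, "j": 2})
print("The value of i is %(i)s and j %(j)s" % ns)
```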

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim.one at home.com  Mon Mar 12 04:25:56 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 11 Mar 2001 22:25:56 -0500
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: <200103112137.QAA13084@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEPGJEAA.tim.one@home.com>

[Guido]
> Actually, I intend to deprecate locals().  For now, globals() are
> fine.  I also intend to deprecate vars(), at least in the form that is
> equivalent to locals().

OK by me.  Note that we agreed long ago that if nested scopes ever made it
in, we would need to supply a way to get a "namespace mapping" object so that
stuff like:

    print "The value of i is %(i)s and j %(j)s" % locals()

could be replaced by:

    print "The value of i is %(i)s and j %(j)s" % namespace_map_object()

Also agreed this need not be a dict; fine by me if it's immutable too.




From ping at lfw.org  Mon Mar 12 06:01:49 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Sun, 11 Mar 2001 21:01:49 -0800 (PST)
Subject: [Python-Dev] Re: Deprecating locals() (was Re: nested scopes and global: some
 corner cases)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEPGJEAA.tim.one@home.com>
Message-ID: <Pine.LNX.4.10.10103112056010.13108-100000@skuld.kingmanhall.org>

On Sun, 11 Mar 2001, Tim Peters wrote:
> OK by me.  Note that we agreed long ago that if nested scopes ever made it
> in, we would need to supply a way to get a "namespace mapping" object so that
> stuff like:
> 
>     print "The value of i is %(i)s and j %(j)s" % locals()
> 
> could be replaced by:
> 
>     print "The value of i is %(i)s and j %(j)s" % namespace_map_object()

I remarked to Jeremy at Python 9 that, given that we have new
variable lookup rules, there should be an API to perform this
lookup.  I suggested that a new method on frame objects would
be a good idea, and Jeremy & Barry seemed to agree.

I was originally thinking of frame.lookup('whatever'), but if
that method happens to be tp_getitem, then i suppose

    print "i is %(i)s and j is %(j)s" % sys.getframe()

would work.  We could call it something else, but one way or
another it's clear to me that this object has to follow lookup
rules that are completely consistent with whatever kind of
scoping is in effect (i.e. throw out *both* globals() and
locals() and provide one function that looks up the whole set
of visible names, rather than just one scope's contents).


-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso




From ping at lfw.org  Mon Mar 12 06:18:06 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Sun, 11 Mar 2001 21:18:06 -0800 (PST)
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <Pine.LNX.4.32.0103111745440.887-100000@localhost.localdomain>
Message-ID: <Pine.LNX.4.10.10103112102030.13108-100000@skuld.kingmanhall.org>

On Sun, 11 Mar 2001, Michel Pelletier wrote:
> As I see it, interfaces satify your first point, remove the need for your
> second and third point, satify your fourth point, and meet the goals of
> your fifth.

For the record, here is a little idea i came up with on the
last day of the conference:

Suppose there is a built-in class called "Interface" with the
special property that whenever any immediate descendant of
Interface is sub-classed, we check to make sure all of its
methods are overridden.  If any methods are not overridden,
something like InterfaceException is raised.

This would be sufficient to provide very simple interfaces,
at least in terms of what methods are part of an interface
(it wouldn't do any type checking, but it could go a step
further and check the number of arguments on each method).

Example:

    >>> class Spam(Interface):
    ...     def islovely(self): pass
    ...
    >>> Spam()
    TypeError: interfaces cannot be instantiated
    >>> class Eggs(Spam):
    ...     def scramble(self): pass
    ...
    InterfaceError: class Eggs does not implement interface Spam
    >>> class LovelySpam(Spam):
    ...     def islovely(self): return 1
    ...
    >>> LovelySpam()
    <LovelySpam instance at ...>

Essentially this would replace the convention of writing a
whole bunch of methods that raise NotImplementedError as a
way of describing an abstract interface, making it a bit easier
to write and causing interfaces to be checked earlier (upon
subclassing, rather than upon method call).

It should be possible to implement this in Python using metaclasses.
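A minimal sketch of how such a metaclass might look (in current Python syntax; the names InterfaceMeta and InterfaceError are invented here for illustration, not part of any proposal):

```python
class InterfaceError(TypeError):
    pass

class InterfaceMeta(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        for base in bases:
            # Check only classes derived from an immediate descendant
            # of Interface, i.e. from an interface declaration.
            if isinstance(base, InterfaceMeta) and Interface in base.__bases__:
                for attr, value in vars(base).items():
                    if callable(value) and not attr.startswith('_') \
                            and attr not in namespace:
                        raise InterfaceError(
                            "class %s does not implement interface %s"
                            % (name, base.__name__))

class Interface(metaclass=InterfaceMeta):
    def __new__(cls):
        # Interface itself and its immediate descendants are abstract.
        if cls is Interface or Interface in cls.__bases__:
            raise TypeError("interfaces cannot be instantiated")
        return super().__new__(cls)

class Spam(Interface):
    def islovely(self): pass

class LovelySpam(Spam):
    def islovely(self): return 1
```

With these definitions, Spam() raises TypeError, a subclass of Spam that omits islovely raises InterfaceError at class-creation time, and LovelySpam() instantiates normally, matching the session above.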


-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso




From uche.ogbuji at fourthought.com  Mon Mar 12 08:11:27 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Mon, 12 Mar 2001 00:11:27 -0700
Subject: [Python-Dev] Revive the types sig? 
In-Reply-To: Message from Jeremy Hylton <jeremy@alum.mit.edu> 
   of "Sun, 11 Mar 2001 20:22:04 EST." <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103120711.AAA09711@localhost.localdomain>

Jeremy Hylton:

> If documentation is really the problem you want to solve, I imagine
> we'd make much more progress if we could agree on a javadoc-style
> format for documentation.  The ability to add return-type declarations
> to functions and methods doesn't seem like much of a win.

I know this isn't the types SIG and all, but since it has come up here, I'd 
like to (once again) express my violent disagreement with the efforts to add 
static typing to Python.  After this, I won't pursue the thread further here.

I used to agree with John Max Skaller that if any such beast were needed, it 
should be a more general system for asserting correctness, but I now realize 
that even that avenue might lead to madness.

Python provides more than enough power for any programmer to impose their own 
correctness tests, including those for type-safety.  Paul has pointed out to 
me that the goal of the types SIG is some mechanism that would not affect 
those of us who want nothing to do with static typing; but my fear is that 
once the decision is made to come up with something, such considerations might 
be the first out the window.  Indeed, the last round of talks produced some 
very outre proposals.

Type errors are not even close to the majority of those I make while 
programming in Python, and I'm quite certain that the code I've written in 
Python is much less buggy than code I've written in strongly-typed languages.  
Expressiveness, IMO, is a far better aid to correctness than artificial 
restrictions (see Java for the example of school-marm programming gone amok).

If I understand Jeremy correctly, I am in strong agreement that it is at least 
worth trying the structured documentation approach to signalling pre- and 
post-conditions before turning Python into a rather different language.


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From tim.one at home.com  Mon Mar 12 08:30:03 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 12 Mar 2001 02:30:03 -0500
Subject: [Python-Dev] RE: Revive the types sig?
In-Reply-To: <200103120711.AAA09711@localhost.localdomain>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEACJFAA.tim.one@home.com>

Could we please prune followups on this to the Types-SIG now?  I don't really
need to see three copies of every msg, and everyone who has the slightest
interest in the topic should already be on the Types-SIG.

grumpily y'rs  - tim




From mwh21 at cam.ac.uk  Mon Mar 12 09:24:03 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 12 Mar 2001 08:24:03 +0000
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Guido van Rossum's message of "Sun, 11 Mar 2001 18:55:03 -0500"
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com>
Message-ID: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido at digicool.com> writes:

> Here's the third installment -- my response to Moshe's rational
> numbers PEP.

I'm replying to Guido mainly through laziness.

> > PEP: XXX
> > Title: Adding a Rational Type to Python
> > Version: $Revision$
> > Author: pep at zadka.site.co.il (Moshe Zadka)
> > Status: Draft
> > Python-Version: 2.2
> > Type: Standards Track
> > Created: 11-Mar-2001
> > Post-History:
> > 
> > 
> > Abstract
> > 
> >     Python has no number type whose semantics are that of a
> >     unboundedly precise rational number.
> 
> But one could easily be added to the standard library, and several
> implementations exist, including one in the standard distribution:
> Demo/classes/Rat.py.
> 
> >     This proposal explains the
> >     semantics of such a type, and suggests builtin functions and
> >     literals to support such a type. In addition, if division of
> >     integers would return a non-integer, it could also return a
> >     rational type.
> 
> It's kind of sneaky not to mention in the abstract that this should be
> the default representation for numbers containing a decimal point,
> replacing most use of floats!

If "/" on integers returns a rational (as I presume it will if
rationals get in as it's the only sane return type), then can we
please have the default way of writing rationals as "p/q"?  OK, so it
might be inefficient (a la complex numbers), but it should be trivial
to optimize if required.

Having ddd.ddd be a rational bothers me.  *No* language does that at
present, do they?  Also, writing rational numbers as decimal floats
strikes me as a bit loopy.  Is 

  0.33333333

1/3 or 3333333/10000000?

Certainly, if it's to go in, I'd like to see

> > Literals
> > 
> >     Literals conforming to the RE '\d*.\d*' will be rational numbers.

in the PEP as justification.

Cheers,
M.

-- 
  MAN:  How can I tell that the past isn't a fiction designed to
        account for the discrepancy between my immediate physical
        sensations and my state of mind?
                   -- The Hitch-Hikers Guide to the Galaxy, Episode 12




From tim.one at home.com  Mon Mar 12 09:52:49 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 12 Mar 2001 03:52:49 -0500
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEADJFAA.tim.one@home.com>

[Michael Hudson]
> ...
> Having ddd.ddd be a rational bothers me.  *No* language does that at
> present, do they?

ABC (Python's closest predecessor) did.  6.02e23 and 1.073242e-301 were also
exact rationals.  *All* numeric literals were.  This explains why they aren't
in Python, but doesn't explain exactly why:  i.e., it didn't work well in
ABC, but it's unclear whether that's because rationals suck, or because you
got rationals even when 10,000 years of computer history <wink> told you that
"." would get you something else.

> Also, writing rational numbers as decimal floats strikes me as a
> bit loopy.  Is
>
>   0.33333333
>
> 1/3 or 3333333/10000000?

Neither, it's 33333333/100000000 (which is what I expect you intended for
your 2nd choice).  Else

    0.33333333 == 33333333/100000000

would be false, and

    0.33333333 * 3 == 1

would be true, and those are absurd if both sides are taken as rational
notations.  OTOH, it's possible to do rational<->string conversion with an
extended notation for "repeating decimals", e.g.

   str(1/3) == "0.(3)"
   eval("0.(3)") == 1/3

would be possible (indeed, I've implemented it in my own rational classes,
but not by default since identifying "the repeating part" in rat->string can
take space proportional to the magnitude of the denominator).
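Tim's own classes aren't shown here, but the rat->string direction he describes can be sketched with long division, remembering remainders to spot where the digits start repeating (rat_to_str is a name invented for this sketch; the stdlib Fraction stands in for a rational type):

```python
from fractions import Fraction

def rat_to_str(r):
    """Render a Fraction as a decimal string, wrapping the repeating
    part in parentheses, e.g. Fraction(1, 3) -> '0.(3)'."""
    n, d = abs(r.numerator), r.denominator
    sign = '-' if r < 0 else ''
    whole, rem = divmod(n, d)
    if rem == 0:
        return sign + str(whole)
    digits = []
    seen = {}                      # remainder -> index of the digit it produced
    while rem and rem not in seen:
        seen[rem] = len(digits)
        digit, rem = divmod(rem * 10, d)
        digits.append(str(digit))
    if rem:                        # a remainder recurred: the tail repeats
        i = seen[rem]
        return '%s%d.%s(%s)' % (sign, whole,
                                ''.join(digits[:i]), ''.join(digits[i:]))
    return '%s%d.%s' % (sign, whole, ''.join(digits))
```

The `seen` dict is exactly the space cost Tim mentions: in the worst case it holds on the order of `denominator` remainders before the cycle closes.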

but-"."-is-mnemonic-for-the-"point"-in-"floating-point"-ly y'rs  - tim




From moshez at zadka.site.co.il  Mon Mar 12 12:51:36 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 13:51:36 +0200 (IST)
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com>
Message-ID: <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>

On 12 Mar 2001 08:24:03 +0000, Michael Hudson <mwh21 at cam.ac.uk> wrote:
 
> If "/" on integers returns a rational (as I presume it will if
> rationals get in as it's the only sane return type), then can we
> please have the default way of writing rationals as "p/q"?

That's proposed in a different PEP. Personally (*shock*) I'd like
all my PEPs to go in, but we sort of agreed that they will only
get in if they can get in in separate pieces.
  
> Having ddd.ddd be a rational bothers me.  *No* language does that at
> present, do they?  Also, writing rational numbers as decimal floats
> strikes me as a bit loopy.  Is 
> 
>   0.33333333
> 
> 1/3 or 3333333/10000000?

The latter. But decimal numbers *are* rationals...just the denominator
is always a power of 10.
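The stdlib Fraction (used here purely for illustration) already reads decimal strings exactly this way, which makes the distinction concrete:

```python
from fractions import Fraction

# '0.33333333' denotes 33333333/100000000 exactly -- not 1/3
assert Fraction('0.33333333') == Fraction(33333333, 10 ** 8)
assert Fraction('0.33333333') != Fraction(1, 3)

# whereas the float 0.33333333 is a nearby rational whose
# denominator is a power of 2 (binary floating point)
assert Fraction(0.33333333).denominator % 2 == 0
```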

> Certainly, if it's to go in, I'd like to see
> 
> > > Literals
> > > 
> > >     Literals conforming to the RE '\d*.\d*' will be rational numbers.
> 
> in the PEP as justification.
 
I'm not understanding you. Do you think it needs more justification, or
that it is justification for something?
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From mwh21 at cam.ac.uk  Mon Mar 12 13:03:17 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 12 Mar 2001 12:03:17 +0000
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: "Tim Peters"'s message of "Mon, 12 Mar 2001 03:52:49 -0500"
References: <LNBBLJKPBEHFEDALKOLCOEADJFAA.tim.one@home.com>
Message-ID: <m3zoer6vp6.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one at home.com> writes:

> [Michael Hudson]
> > ...
> > Having ddd.ddd be a rational bothers me.  *No* language does that at
> > present, do they?
> 
> ABC (Python's closest predecessor) did.  6.02e23 and 1.073242e-301
> were also exact rationals.  *All* numeric literals were.  This
> explains why they aren't in Python, but doesn't explain exactly why:
> i.e., it didn't work well in ABC, but it's unclear whether that's
> because rationals suck, or because you got rationals even when
> 10,000 years of computer history <wink> told you that "." would get
> you something else.

Well, it seems likely that it wouldn't work in Python either, doesn't
it?  Especially with 10010 years of computer history.

> > Also, writing rational numbers as decimal floats strikes me as a
> > bit loopy.  Is
> >
> >   0.33333333
> >
> > 1/3 or 3333333/10000000?
> 
> Neither, it's 33333333/100000000 (which is what I expect you intended for
> your 2nd choice).

Err, yes.  I was feeling too lazy to count 0's.

[snip]
> OTOH, it's possible to do rational<->string conversion with an
> extended notation for "repeating decimals", e.g.
> 
>    str(1/3) == "0.(3)"
>    eval("0.(3)") == 1/3
> 
> would be possible (indeed, I've implemented it in my own rational
> classes, but not by default since identifying "the repeating part"
> in rat->string can take space proportional to the magnitude of the
> denominator).

Hmm, I wonder what the repr of rational(1,3) is...

> but-"."-is-mnemonic-for-the-"point"-in-"floating-point"-ly y'rs  - tim

Quite.

Cheers,
M.

-- 
  Slim Shady is fed up with your shit, and he's going to kill you.
                         -- Eminem, "Public Service Announcement 2000"




From mwh21 at cam.ac.uk  Mon Mar 12 13:07:19 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 12 Mar 2001 12:07:19 +0000
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Moshe Zadka's message of "Mon, 12 Mar 2001 13:51:36 +0200 (IST)"
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk> <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>
Message-ID: <m3wv9v6vig.fsf@atrus.jesus.cam.ac.uk>

Moshe Zadka <moshez at zadka.site.co.il> writes:

> On 12 Mar 2001 08:24:03 +0000, Michael Hudson <mwh21 at cam.ac.uk> wrote:
>  
> > If "/" on integers returns a rational (as I presume it will if
> > rationals get in as it's the only sane return type), then can we
> > please have the default way of writing rationals as "p/q"?
> 
> That's proposed in a different PEP. Personally (*shock*) I'd like
> all my PEPs to go in, but we sort of agreed that they will only
> get in if they can get in in separate pieces.

Fair enough.

> > Having ddd.ddd be a rational bothers me.  *No* language does that at
> > present, do they?  Also, writing rational numbers as decimal floats
> > strikes me as a bit loopy.  Is 
> > 
> >   0.33333333
> > 
> > 1/3 or 3333333/10000000?
> 
> The latter. But decimal numbers *are* rationals...just the denominator
> is always a power of 10.

Well, floating point numbers are rationals too, only the denominator
is always a power of 2 (or sixteen, if you're really lucky).

I suppose I don't have any rational (groan) objections, but it just
strikes me instinctively as a Bad Idea.

> > Certainly, if it's to go in, I'd like to see
                                                 ^
                                             "more than"
sorry.

> > > > Literals
> > > > 
> > > >     Literals conforming to the RE '\d*.\d*' will be rational numbers.
> > 
> > in the PEP as justification.
>  
> I'm not understanding you. Do you think it needs more justification,
> or that it is justification for something?

I think it needs more justification.

Well, actually I think it should be dropped, but if that's not going
to happen, then it needs more justification.

Cheers,
M.

-- 
  To summarise the summary of the summary:- people are a problem.
                   -- The Hitch-Hikers Guide to the Galaxy, Episode 12




From paulp at ActiveState.com  Mon Mar 12 13:27:29 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 04:27:29 -0800
Subject: [Python-Dev] Adding a Rational Type to Python
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>
Message-ID: <3AACC0B1.4AD48247@ActiveState.com>

Whether or not Python adopts rationals as the default number type, a
rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
2.2.

I think that Python users should be allowed to experiment with it before
it becomes the default. If I recode my existing programs to use
rationals and they experience an exponential slow-down, that might
influence my recommendation to Guido. 
-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From thomas at xs4all.net  Mon Mar 12 14:16:00 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 12 Mar 2001 14:16:00 +0100
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <m3zoer6vp6.fsf@atrus.jesus.cam.ac.uk>; from mwh21@cam.ac.uk on Mon, Mar 12, 2001 at 12:03:17PM +0000
References: <LNBBLJKPBEHFEDALKOLCOEADJFAA.tim.one@home.com> <m3zoer6vp6.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <20010312141600.Q404@xs4all.nl>

On Mon, Mar 12, 2001 at 12:03:17PM +0000, Michael Hudson wrote:

> Hmm, I wonder what the repr of rational(1,3) is...

Well, 'rational(1,3)', of course. Unless 1/3 returns a rational, in which
case it can just return '1/3' :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Mon Mar 12 14:51:22 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 08:51:22 -0500
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: Your message of "Mon, 12 Mar 2001 02:39:51 +0100."
             <001c01c0aa95$55836f60$325821c0@newmexico> 
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com> <200103112137.QAA13084@cj20424-a.reston1.va.home.com>  
            <001c01c0aa95$55836f60$325821c0@newmexico> 
Message-ID: <200103121351.IAA18642@cj20424-a.reston1.va.home.com>

> [GvR]
> > Actually, I intend to deprecate locals().  For now, globals() are
> > fine.  I also intend to deprecate vars(), at least in the form that is
> > equivalent to locals().

[Samuele]
> That's fine for me. Will that deprecation already be active in 2.1, e.g.
> having locals() and param-less vars() raise a warning?

Hm, I hadn't thought of doing it right now.

> I imagine a (new) function that produces a snapshot of the values in the
> local, free and cell vars of a scope can do the job required for simple 
> debugging (the copy will not allow modifying the values back), 
> or another approach...

Maybe.  I see two solutions: a function that returns a copy, or a
function that returns a "lazy mapping".  The former could be done as
follows given two scopes:

def namespace():
    d = __builtin__.__dict__.copy()
    d.update(globals())
    d.update(locals())
    return d

The latter like this:

def namespace():
    class C:
        def __init__(self, g, l):
            self.__g = g
            self.__l = l
        def __getitem__(self, key):
            try:
                return self.__l[key]
            except KeyError:
                try:
                    return self.__g[key]
                except KeyError:
                    return __builtin__.__dict__[key]
    return C(globals(), locals())

But of course they would have to work harder to deal with nested
scopes and cells etc.
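For what it's worth, the lazy-mapping variant can be made runnable in modern terms; collections.ChainMap and the CPython-specific sys._getframe are this sketch's own choices, not part of the proposal above:

```python
import builtins
import sys
from collections import ChainMap

def namespace():
    """Lazy read-only view of the caller's visible names:
    locals shadow globals, which shadow builtins."""
    caller = sys._getframe(1)      # CPython implementation detail
    return ChainMap(caller.f_locals, caller.f_globals, vars(builtins))

x = 'global'

def demo():
    x = 'local'
    y = 42
    ns = namespace()
    return ns['x'], ns['y'], ns['len'] is len
```

Like the sketches above, this ignores nested scopes and cell variables; a real implementation would have to fold those in as well.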

I'm not sure if we should add this to 2.1 (if only because it's more
work than I'd like to put in this late in the game) and then I'm not
sure if we should deprecate locals() yet.

> In the meantime (if there's a meantime) is it ok for jython to behave
> the way I have explained or not? 
> wrt exec+locals()+global+nested scopes.

Sure.  You may even document it as one of the known differences.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 12 15:50:44 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 09:50:44 -0500
Subject: [Python-Dev] Re: Unifying Long Integers and Integers
In-Reply-To: Your message of "Mon, 12 Mar 2001 02:25:23 +0200."
             <20010312002523.94F68AA3A@darjeeling.zadka.site.co.il> 
References: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>  
            <20010312002523.94F68AA3A@darjeeling.zadka.site.co.il> 
Message-ID: <200103121450.JAA19125@cj20424-a.reston1.va.home.com>

> [re: numerical PEPs mailing list] 
> > Please help yourself.  I recommend using SF since it requires less
> > overhead for the poor python.org sysadmins.
> 
> Err...I can't. Requesting an SF mailing list is an admin operation.

OK.  I won't make the request (too much going on still) so please ask
someone else at PythonLabs to do it.  Don't just sit there waiting for
one of us to read this mail and do it!

> What do you think? Should I issue a warning on shifting an integer so
> it would be cut/signed in the old semantics?

You'll have to, because the change in semantics will definitely break
some code.

> It also means overflow raises a different exception. Again, I suspect
> it will be used only in cases where the algorithm is supposed to maintain
> that internal results are not bigger than the inputs or things like that,
> and there only as a debugging aid -- so I don't think that this would be that
> bad. And if people want to avoid using the longs for performance reasons,
> then the implementation should definitely *not* lie to them.

It's not clear that using something derived from the machine word size
is the most helpful here.  Maybe a separate integral type that has a
limited range should be used for this.

> OK, so what's the concrete advice?

Propose both alternatives in the PEP.  It's too early to make
decisions -- first we need to have a catalog of our options, and their
consequences.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 12 15:52:20 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 09:52:20 -0500
Subject: [Python-Dev] Re: Non-integer Division
In-Reply-To: Your message of "Mon, 12 Mar 2001 02:36:58 +0200."
             <20010312003658.01096AA27@darjeeling.zadka.site.co.il> 
References: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>  
            <20010312003658.01096AA27@darjeeling.zadka.site.co.il> 
Message-ID: <200103121452.JAA19139@cj20424-a.reston1.va.home.com>

> > > // Operator
> > 
> > Note: we could wind up using a different way to spell this operator,
> > e.g. Pascal uses 'div'.  The disadvantage of 'div' is that it
> > introduces a new reserved word, with all the issues it creates.  The
> > disadvantage of '//' is that it means something very different to Java
> > and C++ users.
> 
> I have zero (0) intuition about what is better. You choose --- I have
> no opinions on this. If we do go the "div" route, I need to also think
> up a syntactic migration path once I figure out the parsing issues
> involved. This isn't an argument -- just something you might want to 
> consider before pronouncing on "div".

As I said in the other thread, it's too early to make the decision --
just present both options in the PEP, and arguments pro/con for each.

> > Maybe for compatibility of bytecode files we should come up with a
> > better name, e.g. FLOAT_DIV?
> 
> Hmmmm..... bytecode files have so far failed to be compatible across
> any revision. I have no problems with that, just that I feel that if
> we're serious about compatibility, we should say so, and if we're not,
> then half-assed measures will not help.

Fair enough.

> [re: from __future__ import non_integer_division] 
> > I find "non_integer_division" rather long.  Maybe it should be called
> > "float_division"?
> 
> I have no problems with that -- except that if the rational PEP is accepted,
> then this would rational_integer_division, and I didn't want to commit
> myself yet.

Understood.

> You haven't commented yet about the rational PEP, so I don't know if that's
> even an option.

Yes I have, but in summary, I still think rationals are a bad idea.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From moshez at zadka.site.co.il  Mon Mar 12 15:55:31 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 16:55:31 +0200 (IST)
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <3AACC0B1.4AD48247@ActiveState.com>
References: <3AACC0B1.4AD48247@ActiveState.com>, <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>
Message-ID: <20010312145531.649E1AA27@darjeeling.zadka.site.co.il>

On Mon, 12 Mar 2001, Paul Prescod <paulp at ActiveState.com> wrote:

> Whether or not Python adopts rationals as the default number type, a
> rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> 2.2.

OK, how about this:

1. I remove the "literals" part from my PEP to another PEP
2. I add to rational() an ability to take strings, such as "1.3" and 
   make rationals out of them

Does anyone have any objections to

a. doing that
b. the PEP that would result from 1+2
?

I even volunteer to code the first prototype.
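A first prototype of point 2 could be tiny. Here is one possible sketch (rational() is the proposed name; the stdlib Fraction stands in for the eventual rational type, and the 'p/q' form is accepted too as an assumption):

```python
from fractions import Fraction

def rational(text):
    """Sketch of the proposed rational() constructor:
    accepts 'p/q' strings and decimal strings like '1.3'."""
    text = text.strip()
    if '/' in text:
        p, q = text.split('/')
        return Fraction(int(p), int(q))
    whole, _, frac = text.partition('.')
    # '1.3' -> int('13') / 10; any sign rides along in the whole part
    return Fraction(int((whole or '0') + frac), 10 ** len(frac))
```

So rational('1.3') gives 13/10 and rational('1/3') gives 1/3, keeping decimal strings exact without touching the meaning of literals.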
 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Mon Mar 12 15:57:31 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 09:57:31 -0500
Subject: [Python-Dev] Re: Adding a Rational Type to Python
In-Reply-To: Your message of "Mon, 12 Mar 2001 03:00:25 +0200."
             <20010312010025.C93A7AA27@darjeeling.zadka.site.co.il> 
References: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>  
            <20010312010025.C93A7AA27@darjeeling.zadka.site.co.il> 
Message-ID: <200103121457.JAA19188@cj20424-a.reston1.va.home.com>

> > Question: the time module's time() function currently returns a
> > float.  Should it return a rational instead?  This is a trick question.
> 
> It should return the most exact number the underlying operating system
> supports. For example, in OSes supporting gettimeofday, return a rational
> built from tv_sec and tv_usec.

I told you it was a trick question. :-)

Time may be *reported* in microseconds, but it's rarely *accurate* to
microseconds.  Because the precision is unclear, I think a float is
more appropriate here.

--Guido van Rossum (home page: http://www.python.org/~guido/)




From paulp at ActiveState.com  Mon Mar 12 16:09:37 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 07:09:37 -0800
Subject: [Python-Dev] Adding a Rational Type to Python
References: <3AACC0B1.4AD48247@ActiveState.com>, <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il> <20010312145531.649E1AA27@darjeeling.zadka.site.co.il>
Message-ID: <3AACE6B1.A599279D@ActiveState.com>

Moshe Zadka wrote:
> 
> On Mon, 12 Mar 2001, Paul Prescod <paulp at ActiveState.com> wrote:
> 
> > Whether or not Python adopts rationals as the default number type, a
> > rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> > 2.2.
> 
> OK, how about this:
> 
> 1. I remove the "literals" part from my PEP to another PEP
> 2. I add to rational() an ability to take strings, such as "1.3" and
>    make rationals out of them

+1

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From guido at digicool.com  Mon Mar 12 16:09:15 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 10:09:15 -0500
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Your message of "Mon, 12 Mar 2001 04:27:29 PST."
             <3AACC0B1.4AD48247@ActiveState.com> 
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>  
            <3AACC0B1.4AD48247@ActiveState.com> 
Message-ID: <200103121509.KAA19299@cj20424-a.reston1.va.home.com>

> Whether or not Python adopts rationals as the default number type, a
> rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> 2.2.
> 
> I think that Python users should be allowed to experiment with it before
> it becomes the default. If I recode my existing programs to use
> rationals and they experience an exponential slow-down, that might
> influence my recommendation to Guido. 

Excellent idea.  Moshe is already biting:

[Moshe]
> On Mon, 12 Mar 2001, Paul Prescod <paulp at ActiveState.com> wrote:
> 
> > Whether or not Python adopts rationals as the default number type, a
> > rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> > 2.2.
> 
> OK, how about this:
> 
> 1. I remove the "literals" part from my PEP to another PEP
> 2. I add to rational() an ability to take strings, such as "1.3" and 
>    make rationals out of them
> 
> Does anyone have any objections to
> 
> a. doing that
> b. the PEP that would result from 1+2
> ?
> 
> I even volunteer to code the first prototype.

I think that would make it a better PEP, and I recommend doing this,
because nothing can be so convincing as a working prototype!

Even so, I'm not sure that rational() should be added to the standard
set of built-in functions, but I'm much less opposed this than I am
against making 0.5 or 1/2 return a rational.  After all we have
complex(), so there's certainly a case to be made for rational().

Note: if you call it fraction() instead, it may appeal more to the
educational crowd!  (In grade school, we learn fractions; not until
late in high school do we learn that mathematicians call fractions
rationals.  It's the same as Randy Pausch's argument about what to call
a quarter turn: not 90 degrees, not pi/2, just call it 1/4 turn. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas at xs4all.net  Mon Mar 12 16:55:12 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 12 Mar 2001 16:55:12 +0100
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <200103121509.KAA19299@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 10:09:15AM -0500
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il> <3AACC0B1.4AD48247@ActiveState.com> <200103121509.KAA19299@cj20424-a.reston1.va.home.com>
Message-ID: <20010312165512.S404@xs4all.nl>

On Mon, Mar 12, 2001 at 10:09:15AM -0500, Guido van Rossum wrote:

> Note: if you call it fraction() instead, it may appeal more to the
> educational crowd!  (In grade school, we learn fractions; not until
> late in high school do we learn that mathematicians call fractions
> rationals.  It's the same as Randy Pausch's argument about what to call
> a quarter turn: not 90 degrees, not pi/2, just call it 1/4 turn. :-)

+1 on fraction(). +0 on making it a builtin instead of a separate module.
(I'm not nearly as worried about adding builtins as I am with adding
keywords <wink>)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From pedroni at inf.ethz.ch  Mon Mar 12 17:47:22 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Mon, 12 Mar 2001 17:47:22 +0100 (MET)
Subject: [Python-Dev] about sparse inputs from the jython userbase & types, language extensions
Message-ID: <200103121647.RAA15331@core.inf.ethz.ch>

Hi.

What follows is maybe too abstract or naive to be useful; if reading this is a 
waste of time: sorry.
Further, I don't know the content of the P3K kick-start session...

"We" are planning to add many features to python. It has also
been explicitly written that this is for the developers to have fun too ;).

Exact arithmetic, behind the scene promotion on overflow, etc...
nested scopes, iterators

A bit joking: lim(t->oo) python ~ Common Lisp

Ok, in python programs and data are not that much the same;
we don't have CL macros (but AFAIK dylan is an example of a language
without data & programs having the same structure but with CL-like macros, so 
maybe...), "we" are not as masochistic as a committee can be, and we
don't carry all the history that CL does.

Python does not (yet) have optional static typing (CL has such a beast, 
as everybody knows), but the idea is always haunting around, mainly for 
documentation and error-checking purposes.

Many of the proposals also go in the direction of making life easier
for newbies, even for programming newbies...
(this is not a paradox: a regular and well-chosen subset of CL can
be appropriate for them, and the world knows a beast called scheme).

Joke: making newbies happy is dangerous; then they will never want
to learn C ;)

The point: what is some (sparse) part of the jython user base asking for?

1. better java integration (for sure).
2. p-e-r-f-o-r-m-a-n-c-e

They ask why jython is so slow, why it does not exploit unboxed ints or floats
(the more informed ones ask),
and whether it is possible to translate jython to java to achieve performance...

The python answer about performance is:
- Think, you don't really need it,
- find the hotspot and code it in C,
- programmer speed is more important than pure program speed,
- python is just a glue language
The Jython answer is not that different.

For someone coming from C or from a lot of Java this is fair.
For the happy newbie it is disappointing. (And it can become
frustrating even for the experienced open-source programmer
 who wants to do more in less time: being able to do as many things
 as possible in Python would be nice <wink>.)

If Python's importance increases, IMHO this will become a real issue
(in Java too, people are always asking for more performance).
And if some software house starts selling the right amount of performance
and dynamism on top of Python for $xK (which is what happens nowadays
with CL), that would be even more disappointing.

(I'm aware that dealing with this, also from a purely code-complexity
viewpoint, may be too much for an open project in terms of motivation.)

regards, Samuele Pedroni.

PS: I'm aware of enough theoretical approaches to performance to know
that optional typing is just one of the possibilities; the point is that
performance as an issue should not be underestimated.




From pedroni at inf.ethz.ch  Mon Mar 12 21:23:25 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Mon, 12 Mar 2001 21:23:25 +0100 (MET)
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
Message-ID: <200103122023.VAA20984@core.inf.ethz.ch>

Hi.

[GvR]
> > I imagine a (new) function that produce a snap-shot of the values in the
> > local,free and cell vars of a scope can do the job required for simple 
> > debugging (the copy will not allow to modify back the values), 
> > or another approach...
> 
> Maybe.  I see two solutions: a function that returns a copy, or a
> function that returns a "lazy mapping".  The former could be done as
> follows given two scopes:
> 
> def namespace():
>     d = __builtin__.__dict__.copy()
>     d.update(globals())
>     d.update(locals())
>     return d
> 
> The latter like this:
> 
> def namespace():
>     class C:
>         def __init__(self, g, l):
>             self.__g = g
>             self.__l = l
>         def __getitem__(self, key):
>             try:
>                 return self.__l[key]
>             except KeyError:
>                 try:
>                     return self.__g[key]
>                 except KeyError:
>                     return __builtin__.__dict__[key]
>     return C(globals(), locals())
> 
> But of course they would have to work harder to deal with nested
> scopes and cells etc.
> 
> I'm not sure if we should add this to 2.1 (if only because it's more
> work than I'd like to put in this late in the game) and then I'm not
> sure if we should deprecate locals() yet.
But in any case we would need something like this to repair pdb,
independently of deprecating locals()...
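In modern terms, the "lazy mapping" Guido sketches above could be made runnable roughly as follows (a rough sketch only; the `Namespace` class and the use of `sys._getframe` are illustrative, not anything proposed in the thread):

```python
import builtins
import sys

class Namespace:
    """Read-only chained lookup: locals, then globals, then builtins."""
    def __init__(self, g, l):
        self._g, self._l = g, l
    def __getitem__(self, key):
        for d in (self._l, self._g, vars(builtins)):
            if key in d:
                return d[key]
        raise KeyError(key)

def namespace():
    # Capture the *caller's* scopes, unlike the quoted version, which
    # would see namespace()'s own locals.
    frame = sys._getframe(1)
    return Namespace(frame.f_globals, frame.f_locals)

x = "global"
def f():
    y = "local"
    ns = namespace()
    return ns["y"], ns["x"], ns["len"]

print(f())  # -> ('local', 'global', <built-in function len>)
```

Because the mapping only holds references to the frame's dictionaries, a debugger could read names through it without handing out a writable snapshot.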

Samuele.




From thomas at xs4all.net  Mon Mar 12 22:04:31 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 12 Mar 2001 22:04:31 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
Message-ID: <20010312220425.T404@xs4all.nl>

Contrary to Guido's keynote last week <wink> there are still two warts I
know of in the current CPython. One is the fact that keywords cannot be used
as identifiers anywhere, the other is the fact that 'continue' can still not
be used inside a 'finally' clause. If I remember correctly, the latter isn't
too hard to fix, it just needs a decision on what it should do :)

Currently, falling out of a 'finally' block will reraise the exception, if
any. Using 'return' and 'break' will drop the exception and continue on as
usual. However, that makes sense (imho) mostly because 'break' will continue
past the try/finally block and 'return' will break out of the function
altogether. Neither has a chance of re-entering the try/finally block.
I'm not sure if that would make sense for 'continue' inside
'finally'.

On the other hand, I'm not sure if it makes sense for 'break' to continue
but for 'continue' to break. :)
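The behaviour described above can be demonstrated directly; this quick sketch shows today's semantics, not any new proposal:

```python
# 'return' inside 'finally' silently discards the in-flight exception.
def swallow_with_return():
    try:
        raise ValueError("never seen")
    finally:
        return "ok"          # the ValueError is dropped here

# 'break' inside 'finally' within a loop behaves the same way.
def swallow_with_break():
    for _ in range(1):
        try:
            raise ValueError("never seen")
        finally:
            break            # likewise dropped
    return "ok"

print(swallow_with_return())  # -> ok
print(swallow_with_break())   # -> ok
```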

As for the other wart, I still want to fix it, but I'm not sure when I get
the chance to grok the parser-generator enough to actually do it :) 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From msw at redhat.com  Mon Mar 12 22:47:05 2001
From: msw at redhat.com (Matt Wilson)
Date: Mon, 12 Mar 2001 16:47:05 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
Message-ID: <20010312164705.C641@devserv.devel.redhat.com>

We've been auditing various code lately to check for /tmp races and so
on.  It seems that tempfile.mktemp() is used throughout the Python
library.  While nice and portable, tempfile.mktemp() is vulnerable to
races.

The TemporaryFile does a nice job of handling the filename returned by
mktemp properly, but there are many modules that don't.

Should I attempt to patch them all to use TemporaryFile?  Or set up
conditional use of mkstemp on those systems that support it?
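For reference, `tempfile.mkstemp()` was only added to the standard library later (Python 2.3); a modern sketch of the race-free alternatives looks like this:

```python
import os
import tempfile

# mkstemp() creates and opens the file in one atomic step (O_EXCL
# semantics), so there is no window between choosing the name and
# creating the file for an attacker to exploit.
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"scratch data")
finally:
    os.close(fd)
    os.remove(path)

# TemporaryFile() goes further: where the platform allows, the file
# has no visible name at all and disappears on close.
with tempfile.TemporaryFile() as f:
    f.write(b"scratch data")
    f.seek(0)
    assert f.read() == b"scratch data"
```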

Cheers,

Matt
msw at redhat.com



From DavidA at ActiveState.com  Mon Mar 12 23:01:02 2001
From: DavidA at ActiveState.com (David Ascher)
Date: Mon, 12 Mar 2001 14:01:02 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
Message-ID: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com>

With apologies for the delay, here are my notes from the numeric coercion
day.

There were many topics which were defined by the Timbot to be within the
scope of the discussion.  Those included:

  - Whether numbers should be rationals / binary FP / decimal FP / etc.
  - Whether there should be support for both exact and inexact computations
  - What division means.

There were few "deliverables" at the end of the day, mostly a lot of
consternation on all sides of the multi-faceted divide, with the impression
in at least this observer's mind that there are few things more
controversial than what numbers are for and how they should work.  A few
things emerged, however:

  0) There is tension between making math in Python 'understandable' to a
high-school kid and making math in Python 'useful' to an engineer/scientist.

  1) We could consider using the new warnings framework for noting things
which are "dangerous" to do with numbers, such as:

       - noting that an operation on 'plain' ints resulted in a 'long'
result.
       - using == when comparing floating point numbers

  2) The Fortran notion of "Kind" as an orthogonal notion to "Type" may make
sense (details to be fleshed out).

  3) Pythonistas are good at quotes:

     "You cannot stop people from complaining, but you can influence
      what they complain about." - Tim Peters

     "The only problem with using rationals for money is that money, is,
      well, not rational." - Moshe Zadka

     "Don't get too apoplectic about this." - Tim Peters

  4) We all agreed that "2" + "23" will not equal "25".
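The warning idea in (1) could be prototyped on top of the warnings framework; `checked_eq` below is a hypothetical helper invented for illustration, not a proposed API:

```python
import warnings

def checked_eq(a, b):
    """Hypothetical helper: warn when == is applied to floats."""
    if isinstance(a, float) or isinstance(b, float):
        warnings.warn("== comparison on floating point values",
                      UserWarning, stacklevel=2)
    return a == b

# 0.1 + 0.2 is not exactly 0.3 in binary floating point:
print(checked_eq(0.1 + 0.2, 0.3))  # -> False
```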

--david ascher




From Greg.Wilson at baltimore.com  Mon Mar 12 23:29:31 2001
From: Greg.Wilson at baltimore.com (Greg Wilson)
Date: Mon, 12 Mar 2001 17:29:31 -0500
Subject: [Python-Dev] more Solaris extension grief
Message-ID: <930BBCA4CEBBD411BE6500508BB3328F1AC593@nsamcanms1.ca.baltimore.com>

I just updated my copy of Python from the CVS repo,
rebuilt on Solaris 5.8, and tried to compile an
extension that is built on top of C++.  I am now
getting lots 'n' lots of error messages as shown
below.  My compile line is:

gcc -shared  ./PyEnforcer.o  -L/home/gvwilson/cozumel/merlot/enforcer
-lenforcer -lopenssl -lstdc++  -o ./PyEnforcer.so

Has anyone seen this problem before?  It does *not*
occur on Linux, using the same version of g++.

Greg

p.s. I configured Python --with-gcc=g++

Text relocation remains                         referenced
    against symbol                  offset      in file
istream type_info function          0x1c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strstream.o)
istream type_info function          0x18
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdiostream.o
)
_IO_stderr_buf                      0x2c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_stderr_buf                      0x28
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_default_xsputn                  0xc70
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(fileops.o)
_IO_default_xsputn                  0xa4
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(streambuf.o)
lseek                               0xa74
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(fileops.o)
_IO_str_init_readonly               0x620
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strstream.o)
_IO_stdout_buf                      0x24
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_stdout_buf                      0x38
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_file_xsputn                     0x43c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(filebuf.o)
fstat                               0xa8c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(fileops.o)
streambuf::sputbackc(char)          0x68c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x838
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x8bc
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x1b4c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x1b80
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x267c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x26f8
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
_IO_file_stat                       0x40c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(filebuf.o)
_IO_setb                            0x844
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(genops.o)
_IO_setb                            0x210
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strops.o)
_IO_setb                            0xa8
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(filedoalloc.o
)
... and so on and so on ...



From barry at digicool.com  Tue Mar 13 00:15:15 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 18:15:15 -0500
Subject: [Python-Dev] Revive the types sig? 
References: <jeremy@alum.mit.edu>
	<15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103120711.AAA09711@localhost.localdomain>
Message-ID: <15021.22659.616556.298360@anthem.wooz.org>

>>>>> "UO" == Uche Ogbuji <uche.ogbuji at fourthought.com> writes:

    UO> I know this isn't the types SIG and all, but since it has come
    UO> up here, I'd like to (once again) express my violent
    UO> disagreement with the efforts to add static typing to Python.
    UO> After this, I won't pursue the thread further here.

Thank you Uche!  I couldn't agree more, and will also try to follow
your example, at least until we see much more concrete proposals from
the types-sig.  I just want to make a few comments for the record.

First, it seemed to me that the greatest push for static type
annotations at IPC9 was from the folks implementing Python on top of
frameworks other than C.  I know from my own experiences that there is
the allure of improved performance, e.g. JPython, given type hints
available to the compiler.  While perhaps a laudable goal, this
doesn't seem to be a stated top priority of Paul's.

Second, if type annotations are to be seriously considered for
inclusion in Python, I think we as a community need considerable
experience with a working implementation.  Yes, we need PEPs and specs
and such, but we need something real and complete that we can play
with, /without/ having to commit to its acceptance in mainstream
Python.  Therefore, I think it'll be very important for type
annotation proponents to figure out a way to allow people to see and
play with an implementation in an experimental way.

This might mean an extensive set of patches, a la Stackless.  After
seeing and talking to Neil and Andrew about PTL and Quixote, I think
there might be another way.  It seems that their approach might serve
as a framework for experimental Python syntaxes with minimal overhead.
If I understand their work correctly, they have their own compiler
which is built on Jeremy's tools, and which accepts a modified Python
grammar, generating different but compatible bytecode sequences.
E.g., their syntax has a "template" keyword approximately equivalent
to "def" and they do something different with bare strings left on the
stack.

The key trick is that it all hooks together with an import hook so
normal Python code doesn't need to know anything about the mechanics
of PTL compilation.  Given a homepage.ptl file, they just do an
"import homepage" and this gets magically transformed into a .ptlc
file and normal Python objects.
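The import-hook trick described above can be sketched with today's importlib machinery (the module name and the trivial "transformed source" here are invented for illustration; real PTL compilation would produce the source text itself):

```python
import importlib.abc
import importlib.util
import sys

class PTLFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Intercept imports of known names and supply transformed source."""
    def __init__(self, sources):
        self.sources = sources  # name -> already-transformed source text

    def find_spec(self, name, path=None, target=None):
        if name in self.sources:
            return importlib.util.spec_from_loader(name, self)
        return None

    def create_module(self, spec):
        return None  # use default module creation

    def exec_module(self, module):
        code = compile(self.sources[module.__name__],
                       module.__name__, "exec")
        exec(code, module.__dict__)

sys.meta_path.insert(0, PTLFinder({"homepage": "greeting = 'hello'"}))

import homepage  # resolved by the hook, not by a file on disk
print(homepage.greeting)  # -> hello
```

Normal client code just writes `import homepage`; everything else happens behind the hook, which is exactly what makes the approach attractive for syntax experiments.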

If I've got this correct, it seems like it would be a powerful tool
for playing with alternative Python syntaxes.  Ideally, the same
technique would allow the types-sig folks to create a working
implementation that would require only the installation of an import
hook.  This would let them build their systems with type annotations
and prove their overwhelming benefit to the skeptical among us.

Cheers,
-Barry



From guido at digicool.com  Tue Mar 13 00:19:39 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:19:39 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: Your message of "Mon, 12 Mar 2001 14:01:02 PST."
             <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com> 
References: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com> 
Message-ID: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>

> With apologies for the delay, here are my notes from the numeric coercion
> day.
> 
> There were many topics which were defined by the Timbot to be within the
> scope of the discussion.  Those included:
> 
>   - Whether numbers should be rationals / binary FP / decimal FP / etc.
>   - Whether there should be support for both exact and inexact computations
>   - What division means.
> 
> There were few "deliverables" at the end of the day, mostly a lot of
> consternation on all sides of the multi-faceted divide, with the impression
> in at least this observer's mind that there are few things more
> controversial than what numbers are for and how they should work.  A few
> things emerged, however:
> 
>   0) There is tension between making math in Python 'understandable' to a
> high-school kid and making math in Python 'useful' to an engineer/scientist.
> 
>   1) We could consider using the new warnings framework for noting things
> which are "dangerous" to do with numbers, such as:
> 
>        - noting that an operation on 'plain' ints resulted in a 'long'
> result.
>        - using == when comparing floating point numbers
> 
>   2) The Fortran notion of "Kind" as an orthogonal notion to "Type" may make
> sense (details to be fleshed out).
> 
>   3) Pythonistas are good at quotes:
> 
>      "You cannot stop people from complaining, but you can influence
>       what they complain about." - Tim Peters
> 
>      "The only problem with using rationals for money is that money, is,
>       well, not rational." - Moshe Zadka
> 
>      "Don't get too apoplectic about this." - Tim Peters
> 
>   4) We all agreed that "2" + "23" will not equal "25".
> 
> --david ascher

Thanks for the notes.  I couldn't be at the meeting, but I attended a
post-meeting lunch roundtable, where much of the above confusion was
reiterated for my convenience.  Moshe's three or four PEPs also came
out of that.  One thing we *could* agree to there, after I pressed
some people: 1/2 should return 0.5.  Possibly 1/2 should not be a
binary floating point number -- but then 0.5 shouldn't either, and
whatever happens, these (1/2 and 0.5) should have the same type, be it
rational, binary float, or decimal float.
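(For the record, this is how the question was eventually settled, in Python 3: true division gives 1/2 and 0.5 the same value and the same type, a binary float, with an exact rational type available separately.)

```python
from fractions import Fraction

print(1 / 2)                    # -> 0.5
print(type(1 / 2) is type(0.5)) # -> True

# An exact (rational) alternative with the same arithmetic:
print(Fraction(1, 2) + Fraction(1, 3))  # -> 5/6
```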

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Tue Mar 13 00:23:06 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:23:06 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
In-Reply-To: Your message of "Mon, 12 Mar 2001 16:47:05 EST."
             <20010312164705.C641@devserv.devel.redhat.com> 
References: <20010312164705.C641@devserv.devel.redhat.com> 
Message-ID: <200103122323.SAA22876@cj20424-a.reston1.va.home.com>

> We've been auditing various code lately to check for /tmp races and so
> on.  It seems that tempfile.mktemp() is used throughout the Python
> library.  While nice and portable, tempfile.mktemp() is vulnerable to
> races.
> 
> The TemporaryFile does a nice job of handling the filename returned by
> mktemp properly, but there are many modules that don't.
> 
> Should I attempt to patch them all to use TemporaryFile?  Or set up
> conditional use of mkstemp on those systems that support it?

Matt, please be sure to look at the 2.1 CVS tree.  I believe that
we've implemented some changes that may make mktemp() better behaved.

If you find that this is still not good enough, please feel free to
submit a patch to SourceForge that fixes the uses of mktemp() --
insofar possible.  (I know e.g. the test suite has some places where
mktemp() is used as the name of a dbm file.)

Thanks for looking into this!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From esr at snark.thyrsus.com  Tue Mar 13 00:36:00 2001
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Mon, 12 Mar 2001 18:36:00 -0500
Subject: [Python-Dev] CML2 compiler slowness
Message-ID: <200103122336.f2CNa0W28998@snark.thyrsus.com>

(Copied to python-dev for informational purposes.)

I added some profiling apparatus to the CML2 compiler and investigated
mec's reports of a twenty-second startup.  I've just released the
version with profiling as 0.9.3, with fixes for all known bugs.

Nope, it's not the quadratic-time validation pass that's eating all
the cycles.  It's the expression parser I generated with John
Aycock's SPARK toolkit -- that's taking up an average of 26 seconds
out of an average 28-second runtime.

While I was at IPC9 last week somebody mumbled something about Aycock's
code being cubic in time.  I should have heard ominous Jaws-style
theme music at that point, because that damn Earley-algorithm parser
has just swum up from the deeps and bitten me on the ass.

Looks like I'm going to have to hand-code an expression parser for
this puppy to speed it up at all.  *groan*  Anybody over on the Python
side know of a faster alternative LL or LR(1) parser generator or
factory class?
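A hand-coded recursive-descent expression parser of the kind Eric mentions is quite small; here is a sketch for CML2-style boolean expressions (the grammar is invented for illustration), which runs in linear time rather than the Earley parser's worst case:

```python
import re

def tokenize(text):
    # names and parens; 'and'/'or'/'not' come out as ordinary words
    return re.findall(r"\w+|[()]", text)

class Parser:
    """Recursive-descent parser; precedence: or < and < not < atom."""
    def __init__(self, text):
        self.toks = tokenize(text)
        self.pos = 0

    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None

    def advance(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def parse(self):
        node = self.or_expr()
        if self.peek() is not None:
            raise SyntaxError("trailing junk: %r" % self.peek())
        return node

    def or_expr(self):
        node = self.and_expr()
        while self.peek() == "or":
            self.advance()
            node = ("or", node, self.and_expr())
        return node

    def and_expr(self):
        node = self.not_expr()
        while self.peek() == "and":
            self.advance()
            node = ("and", node, self.not_expr())
        return node

    def not_expr(self):
        if self.peek() == "not":
            self.advance()
            return ("not", self.not_expr())
        return self.atom()

    def atom(self):
        if self.peek() == "(":
            self.advance()
            node = self.or_expr()
            if self.advance() != ")":
                raise SyntaxError("expected ')'")
            return node
        return self.advance()

print(Parser("a and (b or not c)").parse())
# -> ('and', 'a', ('or', 'b', ('not', 'c')))
```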
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

It will be of little avail to the people, that the laws are made by
men of their own choice, if the laws be so voluminous that they cannot
be read, or so incoherent that they cannot be understood; if they be
repealed or revised before they are promulgated, or undergo such
incessant changes that no man, who knows what the law is to-day, can
guess what it will be to-morrow. Law is defined to be a rule of
action; but how can that be a rule, which is little known, and less
fixed?
	-- James Madison, Federalist Papers 62



From guido at digicool.com  Tue Mar 13 00:32:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:32:37 -0500
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: Your message of "Mon, 12 Mar 2001 22:04:31 +0100."
             <20010312220425.T404@xs4all.nl> 
References: <20010312220425.T404@xs4all.nl> 
Message-ID: <200103122332.SAA22948@cj20424-a.reston1.va.home.com>

> Contrary to Guido's keynote last week <wink> there are still two warts I
> know of in the current CPython. One is the fact that keywords cannot be used
> as identifiers anywhere, the other is the fact that 'continue' can still not
> be used inside a 'finally' clause. If I remember correctly, the latter isn't
> too hard to fix, it just needs a decision on what it should do :)
> 
> Currently, falling out of a 'finally' block will reraise the exception, if
> any. Using 'return' and 'break' will drop the exception and continue on as
> usual. However, that makes sense (imho) mostly because 'break' will continue
> past the try/finally block and 'return' will break out of the function
> altogether. Neither have a chance of reentering the try/finally block
> altogether. I'm not sure if that would make sense for 'continue' inside
> 'finally'.
> 
> On the other hand, I'm not sure if it makes sense for 'break' to continue
> but for 'continue' to break. :)

If you can fix it, the semantics you suggest are reasonable: continue
loses the exception and continues the loop.

> As for the other wart, I still want to fix it, but I'm not sure when I get
> the chance to grok the parser-generator enough to actually do it :) 

Yes, that was on the list once but got dropped.  You might want to get
together with Finn and Samuele to see what their rules are.  (They
allow the use of some keywords at least as keyword=expression
arguments and as object.attribute names.)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Tue Mar 13 00:41:01 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:41:01 -0500
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: Your message of "Mon, 12 Mar 2001 18:15:15 EST."
             <15021.22659.616556.298360@anthem.wooz.org> 
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain>  
            <15021.22659.616556.298360@anthem.wooz.org> 
Message-ID: <200103122341.SAA23054@cj20424-a.reston1.va.home.com>

> >>>>> "UO" == Uche Ogbuji <uche.ogbuji at fourthought.com> writes:
> 
>     UO> I know this isn't the types SIG and all, but since it has come
>     UO> up here, I'd like to (once again) express my violent
>     UO> disagreement with the efforts to add static typing to Python.
>     UO> After this, I won't pursue the thread further here.
> 
> Thank you Uche!  I couldn't agree more, and will also try to follow
> your example, at least until we see much more concrete proposals from
> the types-sig.  I just want to make a few comments for the record.

Barry, you were supposed to throw a brick at me with this content at
the meeting, on Eric's behalf.  Why didn't you?  I was waiting for
someone to explain why this was a big idea, but everybody kept their
face shut!  :-(

> First, it seemed to me that the greatest push for static type
> annotations at IPC9 was from the folks implementing Python on top of
> frameworks other than C.  I know from my own experiences that there is
> the allure of improved performance, e.g. JPython, given type hints
> available to the compiler.  While perhaps a laudable goal, this
> doesn't seem to be a stated top priority of Paul's.
> 
> Second, if type annotations are to be seriously considered for
> inclusion in Python, I think we as a community need considerable
> experience with a working implementation.  Yes, we need PEPs and specs
> and such, but we need something real and complete that we can play
> with, /without/ having to commit to its acceptance in mainstream
> Python.  Therefore, I think it'll be very important for type
> annotation proponents to figure out a way to allow people to see and
> play with an implementation in an experimental way.

+1

> This might mean an extensive set of patches, a la Stackless.  After
> seeing and talking to Neil and Andrew about PTL and Quixote, I think
> there might be another way.  It seems that their approach might serve
> as a framework for experimental Python syntaxes with minimal overhead.
> If I understand their work correctly, they have their own compiler
> which is built on Jeremy's tools, and which accepts a modified Python
> grammar, generating different but compatible bytecode sequences.
> E.g., their syntax has a "template" keyword approximately equivalent
> to "def" and they do something different with bare strings left on the
> stack.

I'm not sure this is viable.  I believe Jeremy's compiler package
actually doesn't have its own parser -- it uses the parser module
(which invokes Python's standard parser) and then transmogrifies the
parse tree into something more usable, but it doesn't change the
syntax!  Quixote can get away with this because their only change
is giving a different meaning to stand-alone string literals.  But for
type annotations this doesn't give enough freedom, I expect.

> The key trick is that it all hooks together with an import hook so
> normal Python code doesn't need to know anything about the mechanics
> of PTL compilation.  Given a homepage.ptl file, they just do an
> "import homepage" and this gets magically transformed into a .ptlc
> file and normal Python objects.

That would be nice, indeed.

> If I've got this correct, it seems like it would be a powerful tool
> for playing with alternative Python syntaxes.  Ideally, the same
> technique would allow the types-sig folks to create a working
> implementation that would require only the installation of an import
> hook.  This would let them build their systems with type annotation
> and prove to the skeptical among us of their overwhelming benefit.

+1

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas at xs4all.net  Tue Mar 13 00:47:14 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 00:47:14 +0100
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 06:19:39PM -0500
References: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com> <200103122319.SAA22854@cj20424-a.reston1.va.home.com>
Message-ID: <20010313004714.U404@xs4all.nl>

On Mon, Mar 12, 2001 at 06:19:39PM -0500, Guido van Rossum wrote:

> One thing we *could* agree to [at lunch], after I pressed
> some people: 1/2 should return 0.5. Possibly 1/2 should not be a
> binary floating point number -- but then 0.5 shouldn't either, and
> whatever happens, these (1/2 and 0.5) should have the same type, be it
> rational, binary float, or decimal float.

Actually, I didn't quite agree, and still don't quite agree (I'm just not
happy with this 'automatic upgrading of types') but I did agree to differ
in opinion and bow to your wishes ;) I did agree that if 1/2 should not
return 0, it should return 0.5 (an object of the same type as
0.5-the-literal.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Tue Mar 13 00:48:00 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:48:00 -0500
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: Your message of "Mon, 12 Mar 2001 18:41:01 EST."
             <200103122341.SAA23054@cj20424-a.reston1.va.home.com> 
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org>  
            <200103122341.SAA23054@cj20424-a.reston1.va.home.com> 
Message-ID: <200103122348.SAA23123@cj20424-a.reston1.va.home.com>

> Barry, you were supposed to throw a brick at me with this content at
> the meeting, on Eric's behalf.  Why didn't you?  I was waiting for
> someone to explain why this was a big idea, but everybody kept their
                                    ^^^^^^^^
> face shut!  :-(

/big idea/ -> /bad idea/ :-(

--Guido van Rossum (home page: http://www.python.org/~guido/)



From barry at digicool.com  Tue Mar 13 00:48:21 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 18:48:21 -0500
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl>
	<200103122332.SAA22948@cj20424-a.reston1.va.home.com>
Message-ID: <15021.24645.357064.856281@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    GvR> Yes, that was on the list once but got dropped.  You might
    GvR> want to get together with Finn and Samuele to see what their
    GvR> rules are.  (They allow the use of some keywords at least as
    GvR> keyword=expression arguments and as object.attribute names.)

I'm actually a little surprised that the "Jython vs. CPython"
differences page doesn't describe this (or am I missing it?):

    http://www.jython.org/docs/differences.html

I thought it used to.

IIRC, keywords were allowed if there was no question of them introducing
a statement.  So yes, keywords were allowed after the dot in attribute
lookups, and as keywords in argument lists, but not as variable names
on the lhs of an assignment (I don't remember if they were legal on
the rhs, but it seems like that ought to be okay, and is actually
necessary if you allow them in argument lists).

It would eliminate much of the need for writing obfuscated code like
"class_" or "klass".

-Barry



From barry at digicool.com  Tue Mar 13 00:52:57 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 18:52:57 -0500
Subject: [Python-Dev] Revive the types sig?
References: <jeremy@alum.mit.edu>
	<15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103120711.AAA09711@localhost.localdomain>
	<15021.22659.616556.298360@anthem.wooz.org>
	<200103122341.SAA23054@cj20424-a.reston1.va.home.com>
Message-ID: <15021.24921.998693.156809@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    GvR> Barry, you were supposed to throw a brick at me with this
    GvR> content at the meeting, on Eric's behalf.  Why didn't you?  I
    GvR> was waiting for someone to explain why this was a big idea,
    GvR> but everybody kept their face shut!  :-(

I actually thought I had, but maybe it was a brick made of bouncy spam
instead of concrete. :/

    GvR> I'm not sure this is viable.  I believe Jeremy's compiler
    GvR> package actually doesn't have its own parser -- it uses the
    GvR> parser module (which invokes Python's standard parse) and
    GvR> then transmogrifies the parse tree into something more
    GvR> usable, but it doesn't change the syntax!  Quixote can get
    GvR> away with this because their only change is giving a
    GvR> different meaning to stand-alone string literals.  But for
    GvR> type annotations this doesn't give enough freedom, I expect.

I thought PTL definitely included a "template" declaration keyword, a
la def, so they must have some solution here.  MEMS guys?

-Barry



From thomas at xs4all.net  Tue Mar 13 01:01:45 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 01:01:45 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <15021.24645.357064.856281@anthem.wooz.org>; from barry@digicool.com on Mon, Mar 12, 2001 at 06:48:21PM -0500
References: <20010312220425.T404@xs4all.nl> <200103122332.SAA22948@cj20424-a.reston1.va.home.com> <15021.24645.357064.856281@anthem.wooz.org>
Message-ID: <20010313010145.V404@xs4all.nl>

On Mon, Mar 12, 2001 at 06:48:21PM -0500, Barry A. Warsaw wrote:
> >>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

>     GvR> Yes, that was on the list once but got dropped.  You might
>     GvR> want to get together with Finn and Samuele to see what their
>     GvR> rules are.  (They allow the use of some keywords at least as
>     GvR> keyword=expression arguments and as object.attribute names.)

> I'm actually a little surprised that the "Jython vs. CPython"
> differences page doesn't describe this (or am I missing it?):

Nope, it's not in there. It should be under the Syntax heading.

>     http://www.jython.org/docs/differences.html

Funnily enough:

"Jython supports continue in a try clause. CPython should be fixed - but
don't hold your breath."

It should be updated for CPython 2.1 when it's released? :-)

[*snip* how Barry thinks he remembers how Jython might handle keywords]

> It would eliminate much of the need for writing obfuscated code like
> "class_" or "klass".

Yup. That's one of the reasons I brought it up. (That, and Mark mentioned
it's actually necessary for .NET Python to adhere to 'the spec'.)

Holding-my-breath-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
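The "class_"/"klass" convention Thomas mentions exists because reserved words can't appear as keyword-argument or attribute names in the source. As a small illustration (a hypothetical sketch, not from the thread), the workaround available today is to smuggle the keyword in through a dict:

```python
# Sketch of the workaround the "class_"/"klass" convention tries to avoid:
# a reserved word can't be written as a keyword argument directly, but it
# can be passed via ** with a dict.
def tag(name, **attrs):
    # Build '<name key="value" ...>' from the keyword arguments.
    pairs = ' '.join('%s="%s"' % item for item in sorted(attrs.items()))
    return '<%s %s>' % (name, pairs)

# tag('div', class='header') is a syntax error, but this works:
print(tag('div', **{'class': 'header'}))   # -> <div class="header">
```

Allowing keywords in those two positions, as Jython does, would make the dict detour unnecessary.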



From nas at arctrix.com  Tue Mar 13 01:07:30 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 12 Mar 2001 16:07:30 -0800
Subject: [Python-Dev] parsers and import hooks [Was: Revive the types sig?]
In-Reply-To: <200103122341.SAA23054@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 06:41:01PM -0500
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org> <200103122341.SAA23054@cj20424-a.reston1.va.home.com>
Message-ID: <20010312160729.A2976@glacier.fnational.com>

[Recipient addresses brutally slashed.]

On Mon, Mar 12, 2001 at 06:41:01PM -0500, Guido van Rossum wrote:
> I'm not sure this is viable.  I believe Jeremy's compiler package
> actually doesn't have its own parser -- it uses the parser module
> (which invokes Python's standard parser) and then transmogrifies the
> parse tree into something more usable, but it doesn't change the
> syntax!

Yup.  Having a more flexible Python-like parser would be cool but
I don't think I'd ever try to implement it.  I know Christian
Tismer wants one.  Maybe he will volunteer. :-)

[On using import hooks to load modules with modified syntax/semantics]
> That would be nice, indeed.

It's nice if you can get it to work.  Import hooks are a bitch to
write and are slow.  Also, you get tracebacks from hell.  It
would be nice if there were higher-level hooks in the
interpreter.  imputil.py did not do the trick for me after
wrestling with it for hours.

  Neil



From nkauer at users.sourceforge.net  Tue Mar 13 01:09:10 2001
From: nkauer at users.sourceforge.net (Nikolas Kauer)
Date: Mon, 12 Mar 2001 18:09:10 -0600 (CST)
Subject: [Python-Dev] syntax exploration tool
In-Reply-To: <15021.22659.616556.298360@anthem.wooz.org>
Message-ID: <Pine.LNX.4.10.10103121801530.7351-100000@falcon.physics.wisc.edu>

I'd volunteer to put in time and help create such a tool.  If someone 
sufficiently knowledgeable decides to go ahead with such a project 
please let me know.

---
Nikolas Kauer <nkauer at users.sourceforge.net>

> Second, if type annotations are to be seriously considered for
> inclusion in Python, I think we as a community need considerable
> experience with a working implementation.  Yes, we need PEPs and specs
> and such, but we need something real and complete that we can play
> with, /without/ having to commit to its acceptance in mainstream
> Python.  Therefore, I think it'll be very important for type
> annotation proponents to figure out a way to allow people to see and
> play with an implementation in an experimental way.




From nas at arctrix.com  Tue Mar 13 01:13:04 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 12 Mar 2001 16:13:04 -0800
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <15021.24921.998693.156809@anthem.wooz.org>; from barry@digicool.com on Mon, Mar 12, 2001 at 06:52:57PM -0500
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org> <200103122341.SAA23054@cj20424-a.reston1.va.home.com> <15021.24921.998693.156809@anthem.wooz.org>
Message-ID: <20010312161304.B2976@glacier.fnational.com>

On Mon, Mar 12, 2001 at 06:52:57PM -0500, Barry A. Warsaw wrote:
> I thought PTL definitely included a "template" declaration keyword, a
> la, def, so they must have some solution here.  MEMs guys?

The correct term is "hack".  We do a re.sub on the text of the
module.  I considered building a new parsermodule with def
changed to template but haven't had time yet.  I think the
dominant cost when importing a PTL module is due to stat() calls
driven by hairy Python code.

  Neil



From jeremy at alum.mit.edu  Tue Mar 13 01:14:47 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 12 Mar 2001 19:14:47 -0500 (EST)
Subject: [Python-Dev] comments on PEP 219
Message-ID: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>

Here are some comments on Gordon's new draft of PEP 219 and the
stackless dev day discussion at Spam 9.

I left the dev day discussion with the following take-home message:
There is a tension between Stackless Python on the one hand and making
Python easy to embed in and extend with C programs on the other hand.
The PEP describes this as the major difficulty with C Python.  I won't
repeat the discussion of the problem there.

I would like to see a somewhat more detailed discussion of this in
the PEP.  I think it's an important issue to work out before making a
decision about a stack-light patch.

The problem of nested interpreters and the C API seems to come up in
several ways.  These are all touched on in the PEP, but not in much
detail.  This message is mostly a request for more detail :-).

  - Stackless disallows transfer out of a nested interpreter.  (It
    has to; anything else would be insane.)  Therefore, the
    specification for microthreads &c. will be complicated by a
    listing of the places where control transfers are not possible.
    The PEP says this is not ideal, but not crippling.  I'd like to
    see an actual spec for where it's not allowed in pure Python.  It
    may not be crippling, but it may be a tremendous nuisance in
    practice; e.g. remember that __init__ calls create a critical
    section.

  - If an application makes use of C extensions that do create nested
    interpreters, they will make it even harder to figure out when
    Python code is executing in a nested interpreter.  For a large
    systems with several C extensions, this could be complicated.  I
    presume, therefore, that there will be a C API for playing nice
    with stackless.  I'd like to see a PEP that discusses what this C
    API would look like.

  - Would all of the internal Python calls that create nested
    interpreters be replaced?  I'm thinking of things like
    PySequence_Fast() and the ternary_op() call in abstract.c.  How
    hard will it be to convert all these functions to be stackless?
    How many functions are affected?  And how many places are they
    called from?

  - What is the performance impact of adding the stackless patches?  I
    think Christian mentioned a 10% slowdown at dev day, which doesn't
    sound unreasonable.  Will reworking the entire interpreter to be
    stackless make that slowdown larger or smaller?

One other set of issues, sort of out of bounds for this particular
PEP, is what control features we want that can only be implemented
with Stackless.  Can we implement generators or coroutines
efficiently without a stackless approach?

Jeremy



From aycock at csc.UVic.CA  Tue Mar 13 01:13:01 2001
From: aycock at csc.UVic.CA (John Aycock)
Date: Mon, 12 Mar 2001 16:13:01 -0800
Subject: [Python-Dev] Re: CML2 compiler slowness
Message-ID: <200103130013.QAA13925@valdes.csc.UVic.CA>

| From esr at snark.thyrsus.com Mon Mar 12 15:14:33 2001
| It's the expression parser I generated with John
| Aycock's SPARK toolkit -- that's taking up an average of 26 seconds
| out of an average 28-second runtime.
|
| While I was at PC9 last week somebody mumbled something about Aycock's
| code being cubic in time.  I should have heard ominous Jaws-style
| theme music at that point, because that damn Earley-algorithm parser
| has just swum up from the deeps and bitten me on the ass.

Eric:

You were partially correctly informed.  The time complexity of Earley's
algorithm is O(n^3) in the worst case, that being the meanest, nastiest,
most ambiguous context-free grammar you could possibly think of.  Unless
you're parsing natural language, this won't happen.  For any unambiguous
grammar, the worst case drops to O(n^2), and for a set of grammars which
loosely coincides with the LR(k) grammars, the complexity drops to O(n).

In other words, it's linear for most programming language grammars.  Now
the overhead for a general parsing algorithm like Earley's is of course
greater than that of a much more specialized algorithm, like LALR(1).

The next version of SPARK uses some of my research work into Earley's
algorithm and improves the speed quite dramatically.  It's not all
ready to go yet, but I can send you my working version which will give
you some idea of how fast it'll be for CML2.  Also, I assume you're
supplying a typestring() method to the parser class?  That speeds things
up as well.

John



From jepler at inetnebr.com  Tue Mar 13 00:38:42 2001
From: jepler at inetnebr.com (Jeff Epler)
Date: Mon, 12 Mar 2001 17:38:42 -0600
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <15021.22659.616556.298360@anthem.wooz.org>
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org>
Message-ID: <20010312173842.A3962@potty.housenet>

On Mon, Mar 12, 2001 at 06:15:15PM -0500, Barry A. Warsaw wrote:
> This might mean an extensive set of patches, a la Stackless.  After
> seeing and talking to Neil and Andrew about PTL and Quixote, I think
> there might be another way.  It seems that their approach might serve
> as a framework for experimental Python syntaxes with minimal overhead.
> If I understand their work correctly, they have their own compiler
> which is built on Jeremy's tools, and which accepts a modified Python
> grammar, generating different but compatible bytecode sequences.
> E.g., their syntax has a "template" keyword approximately equivalent
> to "def" and they do something different with bare strings left on the
> stack.

See also my project, "Möbius Python".[1]

I've used a lot of existing pieces, including the SPARK toolkit,
Tools/compiler, and Lib/tokenize.py.

The end result is a set of Python classes and functions that implement the
whole tokenize/parse/build AST/bytecompile process.  To the extent that
each component is modifiable or subclassable, Python's grammar and semantics
can be extended.  For example, new keywords and statement types can be
introduced (such as Quixote's 'tmpl'), new operators can be introduced
(such as |absolute value|), along with the associated semantics.

(At this time, there is only a limited potential to modify the tokenizer.)

One big problem right now is that Möbius Python only implements the
1.5.2 language subset.

The CVS tree on sourceforge is not up to date, but the tree on my system is
pretty complete, lacking only documentation.  Unfortunately, even a small
modification requires a fair amount of code (My 'absolute value' extension
is 91 lines plus comments, empty lines, and imports)

As far as I know, all that Quixote does at the syntax level is a few
regular expression tricks.  Möbius Python is much more than this.

Jeff
[1] http://mobiuspython.sourceforge.net/



From tim.one at home.com  Tue Mar 13 02:14:34 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 12 Mar 2001 20:14:34 -0500
Subject: [Python-Dev] Re: CML2 compiler slowness
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDLJFAA.tim.one@home.com>

FYI, Fredrik's regexp engine also supports two undocumented match-object
attributes that could be used to speed SPARK lexing, and especially when
there are many token types (gives a direct index to the matching alternative
instead of making you do a linear search for it -- that can add up to a major
win).  Simple example below.

Python-Dev, this has been in there since 2.0 (1.6?  unsure).  I've been using
it happily all along.  If Fredrik is agreeable, I'd like to see this
documented for 2.1, i.e. made an officially supported part of Python's regexp
facilities.

-----Original Message-----
From: Tim Peters [mailto:tim.one at home.com]
Sent: Monday, March 12, 2001 6:37 PM
To: python-list at python.org
Subject: RE: Help with Regular Expressions

[Raymond Hettinger]
> Is there an idiom for how to use regular expressions for lexing?
>
> My attempt below is unsatisfactory because it has to filter the
> entire match group dictionary to find-out which token caused
> the match. This approach isn't scalable because every token
> match will require a loop over all possible token types.
>
> I've fiddled with this one for hours and can't seem to find a
> direct way get a group dictionary that contains only matches.

That's because there isn't a direct way; best you can do now is seek to order
your alternatives most-likely first (which is a good idea anyway, given the
way the engine works).

If you peek inside sre.py (2.0 or later), you'll find an undocumented class
Scanner that uses the undocumented .lastindex attribute of match objects.
Someday I hope this will be the basis for solving exactly the problem you're
facing.  There's also an undocumented .lastgroup attribute:

Python 2.1b1 (#11, Mar  2 2001, 11:23:29) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
IDLE 0.6 -- press F1 for help
>>> import re
>>> pat = re.compile(r"(?P<a>aa)|(?P<b>bb)")
>>> m = pat.search("baab")
>>> m.lastindex  # numeral of group that matched
1
>>> m.lastgroup  # name of group that matched
'a'
>>> m = pat.search("ababba")
>>> m.lastindex
2
>>> m.lastgroup
'b'
>>>

They're not documented yet because we're not yet sure whether we want to make
them permanent parts of the language.  So feel free to play, but don't count
on them staying around forever.  If you like them, drop a note to the effbot
saying so.

for-more-docs-read-the-source-code-ly y'rs  - tim
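To make Tim's point concrete, here is a minimal lexing sketch (illustrative only, with made-up token names) built on the m.lastindex attribute he describes: when each alternative is exactly one group, lastindex maps the match straight to its token type, with no scan over the group dictionary.

```python
import re

# Each alternative contributes exactly one (named) group, so group n
# of the combined pattern corresponds to TOKENS[n - 1].
TOKENS = [('NUM', r'\d+'), ('NAME', r'[A-Za-z_]\w*'), ('OP', r'[-+*/]')]
pat = re.compile('|'.join('(?P<%s>%s)' % t for t in TOKENS))

def tokenize(text):
    out = []
    for m in pat.finditer(text):
        # lastindex is the 1-based number of the group that matched,
        # giving a direct index instead of a linear search.
        out.append((TOKENS[m.lastindex - 1][0], m.group()))
    return out

print(tokenize('x + 42'))   # -> [('NAME', 'x'), ('OP', '+'), ('NUM', '42')]
```

This is essentially what the undocumented sre.Scanner class does internally.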




From paulp at ActiveState.com  Tue Mar 13 02:45:51 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 17:45:51 -0800
Subject: [Python-Dev] FOLLOWUPS!!!!!!!
References: <Pine.LNX.4.10.10103121801530.7351-100000@falcon.physics.wisc.edu>
Message-ID: <3AAD7BCF.4D4F69B7@ActiveState.com>

Please keep follow-ups to just types-sig. I'm very sorry I cross-posted
in the beginning and I apologize to everyone on multiple lists. I did
direct people to follow up only to types-sig but I should have used a
header....or separate posts!

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From ping at lfw.org  Tue Mar 13 02:56:27 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Mon, 12 Mar 2001 17:56:27 -0800 (PST)
Subject: [Python-Dev] parsers and import hooks
In-Reply-To: <20010312160729.A2976@glacier.fnational.com>
Message-ID: <Pine.LNX.4.10.10103121755110.13108-100000@skuld.kingmanhall.org>

On Mon, 12 Mar 2001, Neil Schemenauer wrote:
> 
> It's nice if you can get it to work.  Import hooks are a bitch to
> write and are slow.  Also, you get tracebacks from hell.  It
> would be nice if there were higher-level hooks in the
> interpreter.

Let me chime in with a request, please, for a higher-level find_module()
that understands packages -- or is there already some way to emulate the 
file-finding behaviour of "import x.y.z" that i don't know about?



-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso




From tim.one at home.com  Tue Mar 13 03:07:46 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 12 Mar 2001 21:07:46 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
In-Reply-To: <20010312164705.C641@devserv.devel.redhat.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEDMJFAA.tim.one@home.com>

[Matt Wilson]
> We've been auditing various code lately to check for /tmp races and so
> on.  It seems that tempfile.mktemp() is used throughout the Python
> library.  While nice and portable, tempfile.mktemp() is vulnerable to
> races.
> ...

Adding to what Guido said, the 2.1 mktemp() finally bites the bullet and uses
a mutex to ensure that no two threads (within a process) can ever generate
the same filename.  The 2.0 mktemp() was indeed subject to races in this
respect.  Freedom from cross-process races relies on using the pid in the
filename too.




From paulp at ActiveState.com  Tue Mar 13 03:18:13 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 18:18:13 -0800
Subject: [Python-Dev] CML2 compiler slowness
References: <200103122336.f2CNa0W28998@snark.thyrsus.com>
Message-ID: <3AAD8365.285CCCFE@ActiveState.com>

"Eric S. Raymond" wrote:
> 
> ...
> 
> Looks like I'm going to have to hand-code an expression parser for
> this puppy to speed it up at all.  *groan*  Anybody over on the Python
> side know of a faster alternative LL or LR(1) parser generator or
> factory class?

I tried to warn you about those Earley parsers. :)

  http://mail.python.org/pipermail/python-dev/2000-July/005321.html


Here are some pointers to other solutions:

Martel: http://www.biopython.org/~dalke/Martel

flex/bison: http://www.cs.utexas.edu/users/mcguire/software/fbmodule/

kwparsing: http://www.chordate.com/kwParsing/

mxTextTools: http://www.lemburg.com/files/python/mxTextTools.html

metalang: http://www.tibsnjoan.demon.co.uk/mxtext/Metalang.html

plex: http://www.cosc.canterbury.ac.nz/~greg/python/Plex/

pylr: http://starship.python.net/crew/scott/PyLR.html

SimpleParse: (offline?)

mcf tools: (offline?)

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From thomas at xs4all.net  Tue Mar 13 03:23:02 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 03:23:02 +0100
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Include frameobject.h,2.30,2.31
In-Reply-To: <E14ce4t-0007wS-00@usw-pr-cvs1.sourceforge.net>; from jhylton@usw-pr-web.sourceforge.net on Mon, Mar 12, 2001 at 05:58:23PM -0800
References: <E14ce4t-0007wS-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <20010313032302.W404@xs4all.nl>

On Mon, Mar 12, 2001 at 05:58:23PM -0800, Jeremy Hylton wrote:
> Modified Files:
> 	frameobject.h 
> Log Message:

> There is also a C API change: PyFrame_New() is reverting to its
> pre-2.1 signature.  The change introduced by nested scopes was a
> mistake.  XXX Is this okay between beta releases?

It is definitely fine by me ;-) And Guido's reason for not caring about it
breaking ("no one uses it") applies equally well to unbreaking it between
beta releases.

Backward-bigot-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From paulp at ActiveState.com  Tue Mar 13 04:01:14 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 19:01:14 -0800
Subject: [Python-Dev] Re: CML2 compiler slowness
References: <200103130013.QAA13925@valdes.csc.UVic.CA>
Message-ID: <3AAD8D7A.3634BC56@ActiveState.com>

John Aycock wrote:
> 
> ...
> 
> For any unambiguous
> grammar, the worst case drops to O(n^2), and for a set of grammars 
> which loosely coincides with the LR(k) grammars, the complexity drops 
> to O(n).

I'd say: "it's linear for optimal grammars for most programming
languages." But it doesn't warn you when you are making a "bad grammar"
(not LR(k)) so things just slow down as you add rules...

Is there a tutorial about how to make fast SPARK grammars or should I go
back and re-read my compiler construction books?

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From barry at digicool.com  Tue Mar 13 03:56:42 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 21:56:42 -0500
Subject: [Python-Dev] Revive the types sig?
References: <jeremy@alum.mit.edu>
	<15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103120711.AAA09711@localhost.localdomain>
	<15021.22659.616556.298360@anthem.wooz.org>
	<200103122341.SAA23054@cj20424-a.reston1.va.home.com>
	<15021.24921.998693.156809@anthem.wooz.org>
	<20010312161304.B2976@glacier.fnational.com>
Message-ID: <15021.35946.606279.267593@anthem.wooz.org>

>>>>> "NS" == Neil Schemenauer <nas at arctrix.com> writes:

    >> I thought PTL definitely included a "template" declaration
    >> keyword, a la, def, so they must have some solution here.  MEMs
    >> guys?

    NS> The correct term is "hack".  We do a re.sub on the text of the
    NS> module.  I considered building a new parsermodule with def
    NS> changed to template but haven't had time yet.  I think the
    NS> dominant cost when importing a PTL module is due to stat() calls
    NS> driven by hairy Python code.

Ah, good to know, thanks.  I definitely think it would be A Cool Thing
if one could build a complete Python parser and compiler in Python.
Kind of along the lines of building the interpreter main loop in
Python as much as possible.  I know that /I'm/ not going to have any
time to contribute though (and others have more and better experience
in this area than I do).

-Barry



From paulp at ActiveState.com  Tue Mar 13 04:09:21 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 19:09:21 -0800
Subject: [Python-Dev] Revive the types sig?
References: <jeremy@alum.mit.edu>
		<15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
		<200103120711.AAA09711@localhost.localdomain>
		<15021.22659.616556.298360@anthem.wooz.org>
		<200103122341.SAA23054@cj20424-a.reston1.va.home.com>
		<15021.24921.998693.156809@anthem.wooz.org>
		<20010312161304.B2976@glacier.fnational.com> <15021.35946.606279.267593@anthem.wooz.org>
Message-ID: <3AAD8F61.C61CAC85@ActiveState.com>

"Barry A. Warsaw" wrote:
> 
>...
> 
> Ah, good to know, thanks.  I definitely think it would be A Cool Thing
> if one could build a complete Python parser and compiler in Python.
> Kind of along the lines of building the interpreter main loop in
> Python as much as possible.  I know that /I'm/ not going to have any
> time to contribute though (and others have more and better experience
> in this area than I do).

I'm surprised that there are dozens of compiler compilers written in
Python but few people stepped forward to say that theirs supports Python
itself. mxTextTools has a Python parser...does anyone know how good it
is?

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From esr at thyrsus.com  Tue Mar 13 04:11:02 2001
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 12 Mar 2001 22:11:02 -0500
Subject: [Python-Dev] Re: [kbuild-devel] Re: CML2 compiler slowness
In-Reply-To: <200103130013.QAA13925@valdes.csc.UVic.CA>; from aycock@csc.UVic.CA on Mon, Mar 12, 2001 at 04:13:01PM -0800
References: <200103130013.QAA13925@valdes.csc.UVic.CA>
Message-ID: <20010312221102.A31473@thyrsus.com>

John Aycock <aycock at csc.UVic.CA>:
> The next version of SPARK uses some of my research work into Earley's
> algorithm and improves the speed quite dramatically.  It's not all
> ready to go yet, but I can send you my working version which will give
> you some idea of how fast it'll be for CML2.

I'd like to see it.

>                                             Also, I assume you're
> supplying a typestring() method to the parser class?  That speeds things
> up as well.

I supplied one.  The expression parser promptly dropped from 92% of
the total compiler run time to 87%, a whole 5% of improvement.

To paraphrase a famous line from E.E. "Doc" Smith, "I could eat a handful
of chad and *puke* a faster parser than that..."
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

[W]hat country can preserve its liberties, if its rulers are not
warned from time to time that [the] people preserve the spirit of
resistance?  Let them take arms...The tree of liberty must be
refreshed from time to time, with the blood of patriots and tyrants.
	-- Thomas Jefferson, letter to Col. William S. Smith, 1787 



From msw at redhat.com  Tue Mar 13 04:08:42 2001
From: msw at redhat.com (Matt Wilson)
Date: Mon, 12 Mar 2001 22:08:42 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEDMJFAA.tim.one@home.com>; from tim.one@home.com on Mon, Mar 12, 2001 at 09:07:46PM -0500
References: <20010312164705.C641@devserv.devel.redhat.com> <LNBBLJKPBEHFEDALKOLCMEDMJFAA.tim.one@home.com>
Message-ID: <20010312220842.A14634@devserv.devel.redhat.com>

Right, but this isn't the problem that I'm describing.  Because mktemp
just returns a "checked" filename, it is vulnerable to symlink attacks.
Python programs run as root have a small window of opportunity between
when mktemp checks for the existence of the temp file and when the
function calling mktemp actually uses it.

So, it's hostile out-of-process attacks I'm worrying about, and the
recent CVS changes don't address that.

Cheers,

Matt

On Mon, Mar 12, 2001 at 09:07:46PM -0500, Tim Peters wrote:
> 
> Adding to what Guido said, the 2.1 mktemp() finally bites the bullet and uses
> a mutex to ensure that no two threads (within a process) can ever generate
> the same filename.  The 2.0 mktemp() was indeed subject to races in this
> respect.  Freedom from cross-process races relies on using the pid in the
> filename too.
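The standard fix for the window Matt describes is to make creation itself atomic, rather than checking first and opening later. A minimal sketch (illustrative; the helper name is made up):

```python
import os

# O_EXCL makes open() fail if the path already exists, so a pre-planted
# symlink causes an error instead of being silently followed.  The check
# and the creation happen in one atomic kernel operation, closing the
# check-then-use window entirely.
def create_private(path):
    fd = os.open(path, os.O_RDWR | os.O_CREAT | os.O_EXCL, 0o600)
    return os.fdopen(fd, 'w+')
```

This is the approach that tempfile.mkstemp() later standardized: return an already-open file descriptor instead of a name to be opened separately.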



From tim.one at home.com  Tue Mar 13 04:40:28 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 12 Mar 2001 22:40:28 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com>

[Guido, to David Ascher]
> ...
> One thing we *could* agree to there, after I pressed some people: 1/2
> should return 0.5.

FWIW, in a show of hands at the devday session after you left, an obvious
majority said they did object to the fact that 1/2 is 0 today.  This was bold in the
face of Paul Dubois's decibel-rich opposition <wink>.  There was no consensus
on what it *should* do instead, though.

> Possibly 1/2 should not be a binary floating point number -- but then
> 0.5 shouldn't either, and whatever happens, these (1/2 and 0.5) should
> have the same type, be it rational, binary float, or decimal float.

I don't know that imposing this formal simplicity is going to be a genuine
help, because the area it's addressing is inherently complex.  In such cases,
simplicity is bought at the cost of trying to wish away messy realities.
You're aiming for Python arithmetic that's about 5x simpler than Python
strings <0.7 wink>.

It rules out rationals because you already know how insisting on this rule
worked out in ABC (it didn't).

It rules out decimal floats because scientific users can't tolerate the
inefficiency of simulating arithmetic in software (software fp is at best
~10x slower than native fp, assuming expertly hand-optimized assembler
exploiting platform HW tricks), and aren't going to agree to stick physical
constants in strings to pass to some "BinaryFloat()" constructor.

That only leaves native HW floating-point, but you already know *that*
doesn't work for newbies either.

Presumably ABC used rationals because usability studies showed they worked
best (or didn't they test this?).  Presumably the TeachScheme! dialect of
Scheme uses rationals for the same reason.  Curiously, the latter behaves
differently depending on "language level":

> (define x (/ 2 3))
> x
2/3
> (+ x 0.5)
1.1666666666666665
>

That's what you get under the "Full Scheme" setting.  Under all other
settings (Beginning, Intermediate, and Advanced Student), you get this
instead:

> (define x (/ 2 3))
> x
2/3
> (+ x 0.5)
7/6
>

In those you have to tag 0.5 as being inexact in order to avoid having it
treated as ABC did (i.e., as an exact decimal rational):

> (+ x #i0.5)
#i1.1666666666666665
>

> (- (* .58 100) 58)   ; showing that .58 is treated as exact
0
> (- (* #i.58 100) 58) ; same IEEE result as Python when .58 tagged w/ #i
#i-7.105427357601002e-015
>

So that's their conclusion:  exact rationals are best for students at all
levels (apparently the same conclusion reached by ABC), but when you get to
the real world rationals are no longer a suitable meaning for fp literals
(apparently the same conclusion *I* reached from using ABC; 1/10 and 0.1 are
indeed very different beasts to me).

A hard question:  what if they're right?  That is, that you have to favor one
of newbies or experienced users at the cost of genuine harm to the other?




From aycock at csc.UVic.CA  Tue Mar 13 04:32:54 2001
From: aycock at csc.UVic.CA (John Aycock)
Date: Mon, 12 Mar 2001 19:32:54 -0800
Subject: [Python-Dev] Re: [kbuild-devel] Re: CML2 compiler slowness
Message-ID: <200103130332.TAA17222@valdes.csc.UVic.CA>

Eric the Poet <esr at thyrsus.com> writes:
| To paraphrase a famous line from E.E. "Doc" Smith, "I could eat a handful
| of chad and *puke* a faster parser than that..."

Indeed.  Very colorful.

I'm sending you the in-development version of SPARK in a separate
message.

John



From martin at loewis.home.cs.tu-berlin.de  Tue Mar 13 07:06:13 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 13 Mar 2001 07:06:13 +0100
Subject: [Python-Dev] more Solaris extension grief
Message-ID: <200103130606.f2D66D803507@mira.informatik.hu-berlin.de>

gcc -shared  ./PyEnforcer.o  -L/home/gvwilson/cozumel/merlot/enforcer
-lenforcer -lopenssl -lstdc++  -o ./PyEnforcer.so

> Text relocation remains                         referenced
>    against symbol                  offset      in file
> istream type_info function          0x1c
> /usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strstream.o)
> istream type_info function          0x18

> Has anyone seen this problem before?

Yes, there have been a number of SF bug reports on that, and proposals
to fix that. It's partly a policy issue, but I believe all these
patches have been wrong, as the problem is not in Python.

When you build a shared library, it ought to be
position-independent. If it is not, the linker will need to put
relocation instructions into the text segment, which means that the
text segment has to be writable. In turn, the text of the shared
library will not be demand-paged anymore, but copied into main memory
when the shared library is loaded. Therefore, gcc asks ld to issue an
error if non-PIC code is integrated into a shared object.

To have the compiler emit position-independent code, you need to pass
the -fPIC option when producing object files. You not only need to do
that for your own object files, but for the object files of all the
static libraries you are linking with. In your case, the static
library is libstdc++.a.

Please note that linking libstdc++.a statically not only means that
you lose position-independence; it also means that you end up with a
copy of libstdc++.a in each extension module that you link with it.
In turn, global objects defined in the library may be constructed
twice (I believe).

There are a number of solutions:

a) Build libstdc++ as a  shared library. This is done on Linux, so
   you don't get the error on Linux.

b) Build libstdc++.a using -fPIC. The gcc build process does not
   support such a configuration, so you'd need to arrange that
   yourself.

c) Pass the -mimpure-text option to gcc when linking. That will make
   the text segment writable, and silence the linker.

There was one proposal that looks like it would work, but doesn't:

d) Instead of linking with -shared, link with -G. That forgets to link
   the shared library startup files (crtbeginS/crtendS) into the shared
   library, which in turn means that constructors of global objects will
   fail to work; it also does a number of other things incorrectly.

Regards,
Martin



From martin at loewis.home.cs.tu-berlin.de  Tue Mar 13 07:12:41 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 13 Mar 2001 07:12:41 +0100
Subject: [Python-Dev] CML2 compiler slowness
Message-ID: <200103130612.f2D6Cfa03574@mira.informatik.hu-berlin.de>

> Anybody over on the Python side know of a faster alternative LL or
> LR(1) parser generator or factory class?

I'm using Yapps (http://theory.stanford.edu/~amitp/Yapps/), and find
it quite convenient, and also sufficiently fast (it gives, together
with sre, a factor of two or three over a flex/bison solution for XPath
parsing). I've been using my own lexer (using sre), both to improve
speed and to deal with the subtleties of XPath tokenization.  If
you can send me the grammar and some sample sentences, I can help
writing a Yapps parser (as I think Yapps is an under-used kit).

Again, this question is probably better asked on python-list than
python-dev...

Regards,
Martin



From trentm at ActiveState.com  Tue Mar 13 07:56:12 2001
From: trentm at ActiveState.com (Trent Mick)
Date: Mon, 12 Mar 2001 22:56:12 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 06:19:39PM -0500
References: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com> <200103122319.SAA22854@cj20424-a.reston1.va.home.com>
Message-ID: <20010312225612.H8460@ActiveState.com>

I just want to add that one of the main participants in the Numeric Coercion
session was Paul Dubois and I am not sure that he is on python-dev. He should
probably be in this discussion.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From guido at digicool.com  Tue Mar 13 10:58:32 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 04:58:32 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Include frameobject.h,2.30,2.31
In-Reply-To: Your message of "Tue, 13 Mar 2001 03:23:02 +0100."
             <20010313032302.W404@xs4all.nl> 
References: <E14ce4t-0007wS-00@usw-pr-cvs1.sourceforge.net>  
            <20010313032302.W404@xs4all.nl> 
Message-ID: <200103130958.EAA29951@cj20424-a.reston1.va.home.com>

> On Mon, Mar 12, 2001 at 05:58:23PM -0800, Jeremy Hylton wrote:
> > Modified Files:
> > 	frameobject.h 
> > Log Message:
> 
> > There is also a C API change: PyFrame_New() is reverting to its
> > pre-2.1 signature.  The change introduced by nested scopes was a
> > mistake.  XXX Is this okay between beta releases?
> 
> It is definitely fine by me ;-) And Guido's reason for not caring about it
> breaking ("no one uses it") applies equally well to unbreaking it between
> beta releases.

This is a good thing!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Tue Mar 13 11:18:35 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 05:18:35 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: Your message of "Mon, 12 Mar 2001 22:40:28 EST."
             <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> 
Message-ID: <200103131018.FAA30047@cj20424-a.reston1.va.home.com>

> [Guido, to David Ascher]
> > ...
> > One thing we *could* agree to there, after I pressed some people: 1/2
> > should return 0.5.
> 
> FWIW, in a show of hands at the devday session after you left, an obvious
> majority said they did object to that 1/2 is 0 today.  This was bold in the
> face of Paul Dubois's decibel-rich opposition <wink>.  There was no consensus
> on what it *should* do instead, though.
> 
> > Possibly 1/2 should not be a binary floating point number -- but then
> > 0.5 shouldn't either, and whatever happens, these (1/2 and 0.5) should
> > have the same type, be it rational, binary float, or decimal float.
> 
> I don't know that imposing this formal simplicity is going to be a genuine
> help, because the area it's addressing is inherently complex.  In such cases,
> simplicity is bought at the cost of trying to wish away messy realities.
> You're aiming for Python arithmetic that's about 5x simpler than Python
> strings <0.7 wink>.
> 
> It rules out rationals because you already know how insisting on this rule
> worked out in ABC (it didn't).
> 
> It rules out decimal floats because scientific users can't tolerate the
> inefficiency of simulating arithmetic in software (software fp is at best
> ~10x slower than native fp, assuming expertly hand-optimized assembler
> exploiting platform HW tricks), and aren't going to agree to stick physical
> constants in strings to pass to some "BinaryFloat()" constructor.
> 
> That only leaves native HW floating-point, but you already know *that*
> doesn't work for newbies either.

I'd like to argue about that.  I think the extent to which HWFP
doesn't work for newbies is mostly related to the change we made in
2.0 where repr() (and hence the interactive prompt) show full
precision, leading to annoyances like repr(1.1) == '1.1000000000000001'.

I've noticed that the number of complaints I see about this went way
up after 2.0 was released.

I expect that most newbies don't use floating point in a fancy way,
and would never notice it if it was slightly off as long as the output
was rounded like it was before 2.0.

> Presumably ABC used rationals because usability studies showed they worked
> best (or didn't they test this?).

No, I think at best the usability studies showed that floating point
had problems that the ABC authors weren't able to clearly explain to
newbies.  There was never an experiment comparing FP to rationals.

> Presumably the TeachScheme! dialect of
> Scheme uses rationals for the same reason.

Probably for the same reasons.

> Curiously, the latter behaves
> differently depending on "language level":
> 
> > (define x (/ 2 3))
> > x
> 2/3
> > (+ x 0.5)
> 1.1666666666666665
> >
> 
> That's what you get under the "Full Scheme" setting.  Under all other
> settings (Beginning, Intermediate, and Advanced Student), you get this
> instead:
> 
> > (define x (/ 2 3))
> > x
> 2/3
> > (+ x 0.5)
> 7/6
> >
> 
> In those you have to tag 0.5 as being inexact in order to avoid having it
> treated as ABC did (i.e., as an exact decimal rational):
> 
> > (+ x #i0.5)
> #i1.1666666666666665
> >
> 
> > (- (* .58 100) 58)   ; showing that .58 is treated as exact
> 0
> > (- (* #i.58 100) 58) ; same IEEE result as Python when .58 tagged w/ #i
> #i-7.105427357601002e-015
> >
> 
> So that's their conclusion:  exact rationals are best for students at all
> levels (apparently the same conclusion reached by ABC), but when you get to
> the real world rationals are no longer a suitable meaning for fp literals
> (apparently the same conclusion *I* reached from using ABC; 1/10 and 0.1 are
> indeed very different beasts to me).

Another hard question: does that mean that 1 and 1.0 are also very
different beasts to you?  They weren't to the Alice users who started
this by expecting 1/4 to represent a quarter turn.
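Tim's Scheme session can be reproduced with exact rationals in Python.  This
is an after-the-fact sketch using the fractions module, which did not exist
at the time of this discussion:

```python
from fractions import Fraction

# Reproducing the exact-vs-inexact split from the Scheme transcript above.
x = Fraction(2, 3)
exact = x + Fraction(1, 2)       # like Scheme's exact 0.5 -> 7/6
inexact = float(x) + 0.5         # like Scheme's #i0.5 -> 1.1666666666666665

# The ".58 is treated as exact" demonstration:
assert Fraction(58, 100) * 100 - 58 == 0   # exact rational: no residue
assert 0.58 * 100 - 58 != 0                # IEEE double: tiny residue
```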

> A hard question:  what if they're right?  That is, that you have to favor one
> of newbies or experienced users at the cost of genuine harm to the other?

You know where I'm leaning...  I don't know that newbies are genuinely
hurt by FP.  If we do it right, the naive ones will try 11.0/10.0, see
that it prints 1.1, and be happy; the persistent ones will try
1.1**2-1.21, ask for an explanation, and get an introduction to
floating point.  This *doesn't* have to explain all the details, just
the two facts that you can lose precision and that 1.1 isn't
representable exactly in binary.  Only the latter should be new to
them.
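Both facts can be demonstrated in a couple of lines at the prompt:

```python
# The naive case: 11.0/10.0 rounds to the same double as the literal 1.1,
# so it prints as 1.1 and compares equal.
x = 11.0 / 10.0
assert x == 1.1

# The persistent case: 1.1 has no exact binary representation, so
# 1.1**2 - 1.21 is a tiny nonzero number rather than 0.
residue = 1.1 ** 2 - 1.21
assert residue != 0.0
assert abs(residue) < 1e-15
```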

--Guido van Rossum (home page: http://www.python.org/~guido/)



From paulp at ActiveState.com  Tue Mar 13 12:45:21 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Tue, 13 Mar 2001 03:45:21 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> <200103131018.FAA30047@cj20424-a.reston1.va.home.com>
Message-ID: <3AAE0851.3B683941@ActiveState.com>

Guido van Rossum wrote:
> 
>...
> 
> You know where I'm leaning...  I don't know that newbies are genuinely
> hurt by FP.  If we do it right, the naive ones will try 11.0/10.0, see
> that it prints 1.1, and be happy; the persistent ones will try
> 1.1**2-1.21, ask for an explanation, and get an introduction to
> floating point.  This *doesn't* have to explain all the details, just
> the two facts that you can lose precision and that 1.1 isn't
> representable exactly in binary.  Only the latter should be new to
> them.

David Ascher suggested during the talk that comparisons of floats could
raise a warning unless you turned that warning off (which only
knowledgeable people would do). I think that would go a long way to
helping them find and deal with serious floating point inaccuracies in
their code.

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From guido at digicool.com  Tue Mar 13 12:42:35 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 06:42:35 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: Your message of "Tue, 13 Mar 2001 03:45:21 PST."
             <3AAE0851.3B683941@ActiveState.com> 
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> <200103131018.FAA30047@cj20424-a.reston1.va.home.com>  
            <3AAE0851.3B683941@ActiveState.com> 
Message-ID: <200103131142.GAA30567@cj20424-a.reston1.va.home.com>

[me]
> > You know where I'm leaning...  I don't know that newbies are genuinely
> > hurt by FP.  If we do it right, the naive ones will try 11.0/10.0, see
> > that it prints 1.1, and be happy; the persistent ones will try
> > 1.1**2-1.21, ask for an explanation, and get an introduction to
> > floating point.  This *doesn't* have to explain all the details, just
> > the two facts that you can lose precision and that 1.1 isn't
> > representable exactly in binary.  Only the latter should be new to
> > them.

[Paul]
> David Ascher suggested during the talk that comparisons of floats could
> raise a warning unless you turned that warning off (which only
> knowledgeable people would do). I think that would go a long way to
> helping them find and deal with serious floating point inaccuracies in
> their code.

You mean only for == and !=, right?  This could easily be implemented
now that we have rich comparisons.  We should wait until 2.2 though --
we haven't clearly decided that this is the way we want to go.
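With rich comparisons, the sketch could be as small as a float subclass.
CheckedFloat and its warning text are invented names for illustration, not
any real or proposed API:

```python
import warnings

# Hypothetical sketch of David Ascher's idea: exact equality tests on
# floats trigger a warning unless the user filters it out.
class CheckedFloat(float):
    def __eq__(self, other):
        warnings.warn("exact equality comparison between floats",
                      stacklevel=2)
        return float.__eq__(self, other)

    def __ne__(self, other):
        warnings.warn("exact equality comparison between floats",
                      stacklevel=2)
        return float.__ne__(self, other)

    __hash__ = float.__hash__   # keep instances hashable
```

A knowledgeable user would silence it with a warnings filter; everyone else
would get a nudge toward tolerance-based comparison.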

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas at xs4all.net  Tue Mar 13 12:54:19 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 12:54:19 +0100
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131018.FAA30047@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Tue, Mar 13, 2001 at 05:18:35AM -0500
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> <200103131018.FAA30047@cj20424-a.reston1.va.home.com>
Message-ID: <20010313125418.A404@xs4all.nl>

On Tue, Mar 13, 2001 at 05:18:35AM -0500, Guido van Rossum wrote:

> I think the extent to which HWFP doesn't work for newbies is mostly
> related to the change we made in 2.0 where repr() (and hence the
> interactive prompt) show full precision, leading to annoyances like
> repr(1.1) == '1.1000000000000001'.
> 
> I've noticed that the number of complaints I see about this went way up
> after 2.0 was released.
> 
> I expect that most newbies don't use floating point in a fancy way, and
> would never notice it if it was slightly off as long as the output was
> rounded like it was before 2.0.

I suspect that the change in float.__repr__() did reduce the number of
surprises over something like this, though: (taken from a 1.5.2 interpreter)

>>> x = 1.000000000001
>>> x
1.0
>>> x == 1.0
0

If we go for the HWFP + loosened precision in printing you seem to prefer,
we should be conscious about this, possibly raising a warning when comparing
floats in this way. (Or in any way at all?  Given that when you compare two
floats, you either didn't intend to, or your name is Tim or Moshe and you
would be just as happy writing the IEEE754 binary representation directly :)
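The 1.5.2 surprise can be simulated in any Python by formatting the way the
old repr() did (my recollection is that it used "%.12g"; treat that as an
assumption):

```python
# A value that *displays* as 1.0 under rounded repr, yet does not
# compare equal to 1.0 -- the trap shown in the 1.5.2 session above.
x = 1.000000000001

old_style = "%.12g" % x   # 1.5.2-era formatting; ".0" was appended for display
assert old_style == "1"   # the difference is invisible at 12 digits
assert x != 1.0           # but exact comparison still sees it
```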

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tismer at tismer.com  Tue Mar 13 14:29:53 2001
From: tismer at tismer.com (Christian Tismer)
Date: Tue, 13 Mar 2001 14:29:53 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <3AAE20D1.5D375ECB@tismer.com>

Ok, I'm adding some comments.

Jeremy Hylton wrote:
> 
> Here are some comments on Gordon's new draft of PEP 219 and the
> stackless dev day discussion at Spam 9.
> 
> I left the dev day discussion with the following takehome message:
> There is a tension between Stackless Python on the one hand and making
> Python easy to embed in and extend with C programs on the other hand.
> The PEP describes this as the major difficulty with C Python.  I won't
> repeat the discussion of the problem there.
> 
> I would like to see a somewhat more detailed discussion of this in
> the PEP.  I think it's an important issue to work out before making a
> decision about a stack-light patch.
> 
> The problem of nested interpreters and the C API seems to come up in
> several ways.  These are all touched on in the PEP, but not in much
> detail.  This message is mostly a request for more detail :-).
> 
>   - Stackless disallows transfer out of a nested interpreter.  (It
>     has, too; anything else would be insane.)  Therefore, the
>     specification for microthreads &c. will be complicated by a
>     listing of the places where control transfers are not possible.

To be more precise: Stackless catches any attempt to transfer to a
frame that has been locked (is being run) by an interpreter that is not
the topmost on the C stack. That's all. You might even run Microthreads
in the fifth interpreter recursion, and later return to other
(stalled) microthreads, as long as this condition is met.

>     The PEP says this is not ideal, but not crippling.  I'd like to
>     see an actual spec for where it's not allowed in pure Python.  It
>     may not be crippling, but it may be a tremendous nuisance in
>     practice; e.g. remember that __init__ calls create a critical
>     section.

At the moment, *all* of the __xxx__ methods are restricted to stack-
like behavior. __init__ and __getitem__ should probably be the first
methods beyond Stack-lite, which should get extra treatment.

>   - If an application makes use of C extensions that do create nested
>     interpreters, they will make it even harder to figure out when
>     Python code is executing in a nested interpreter.  For a large
>     systems with several C extensions, this could be complicated.  I
>     presume, therefore, that there will be a C API for playing nice
>     with stackless.  I'd like to see a PEP that discusses what this C
>     API would look like.

Ok. I see the need for an interface for frames here.
An extension should be able to create a frame, together with
necessary local memory.
It appears to need two or three functions in the extension:
1) Preparation phase
   The extension provides an "interpreter" function which is in
   charge of handling this frame. The preparation phase puts a
   pointer to this function into the frame.
2) Execution phase
   The frame is run by the frame dispatcher, which calls the
   interpreter function.
   For every nested call into Python, the interpreter function
   needs to return with a special signal for the scheduler,
   that there is now a different frame to be scheduled.
   These notifications, and modifying the frame chain, should
   be hidden by API calls.
3) cleanup phase (necessary?)
   A finalization function may be (optionally) provided for
   the frame destructor.
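The three phases can be modeled in a few lines of Python.  Every name here
is invented for illustration; this is not the real Stackless API, just a
sketch of the shape of the protocol:

```python
# A frame carries a pointer to its "interpreter" function (phase 1);
# the dispatcher calls it repeatedly instead of recursing (phase 2);
# an optional finalizer runs when the frame is done (phase 3).
DONE, RESCHEDULE = "done", "reschedule"

class Frame:
    def __init__(self, interp, finalize=None):
        self.interp = interp        # phase 1: installed at preparation
        self.finalize = finalize    # phase 3: optional cleanup
        self.state = 0              # stands in for the frame's local memory

def dispatch(frame):
    """Phase 2: run the frame until its interpreter signals completion,
    never nesting another interpreter invocation on the C stack."""
    steps = 0
    while frame.interp(frame) != DONE:
        steps += 1
    if frame.finalize is not None:
        frame.finalize(frame)
    return steps + 1

def count_to_three(frame):
    frame.state += 1
    return DONE if frame.state >= 3 else RESCHEDULE
```

The point of the shape: control always returns to the dispatcher between
steps, which is what makes switching to a different frame possible.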

>   - Would all of the internal Python calls that create nested
>     functions be replaced?  I'm thinking of things like
>     PySequence_Fast() and the ternary_op() call in abstract.c.  How
>     hard will it be to convert all these functions to be stackless?

PySequence_Fast() calls back into PySequence_Tuple(). In the generic
sequence case, it calls 
       PyObject *item = (*m->sq_item)(v, i);

This call may now need to return to the frame dispatcher without
having its work done. But we cannot do this, because the current
API guarantees that this method will return either with a result
or an exception. This means, we can of course modify the interpreter
to deal with a third kind of state, but this would probably break
some existing extensions.
It was the reason why I didn't try to go further here: whatever
is exposed to code other than Python itself might break under such
an extension, unless we find a way to distinguish *who* calls.
On the other hand, if we are really at a new Python,
incompatibility would be just ok, and the problem would vanish.

>     How many functions are affected?  And how many places are they
>     called from?

This needs more investigation.

>   - What is the performance impact of adding the stackless patches?  I
>     think Christian mentioned a 10% slowdown at dev day, which doesn't
>     sound unreasonable.  Will reworking the entire interpreter to be
>     stackless make that slowdown larger or smaller?

No, it is about 5 percent. My optimization gains about 15 percent,
which makes a win of 10 percent overall.
The speed loss seems to be related to extra initialization calls
for frames, and the somewhat more difficult parameter protocol.
The fact that recursions are turned into repetitive calls from
a scheduler seems to have no impact. In other words: Further
"stackless" versions of internal functions will probably not
produce another slowdown.
This matches the observation that the number of function calls
is nearly the same, whether recursion is used or stackless.
It is mainly the order of function calls that is changed.

> One other set of issues, that is sort-of out of bounds for this
> particular PEP, is what control features do we want that can only be
> implemented with stackless.  Can we implement generators or coroutines
> efficiently without a stackless approach?

For some limited view of generators: Yes, absolutely. *)
For coroutines: For sure not.

*) generators which live in the context of the calling
function, like the stack-based generator implementation of
one of the first ICON implementations, I think.
That is, these generators cannot be re-used somewhere else.
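The "limited" generator meant here can be sketched as a callback run inside
the caller's frame (ICON-style): values are delivered while the producing
call is still on the stack, so the generator cannot be suspended and handed
to other code.

```python
# Stack-based "generator": the consumer is called from inside the
# producer's frame, so production cannot be paused or resumed elsewhere.
def squares_upto(n, consume):
    for i in range(n):
        consume(i * i)

collected = []
squares_upto(4, collected.append)
```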

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From uche.ogbuji at fourthought.com  Tue Mar 13 15:47:17 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Tue, 13 Mar 2001 07:47:17 -0700
Subject: [Python-Dev] comments on PEP 219 
In-Reply-To: Message from Jeremy Hylton <jeremy@alum.mit.edu> 
   of "Mon, 12 Mar 2001 19:14:47 EST." <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103131447.HAA32016@localhost.localdomain>

> Here are some comments on Gordon's new draft of PEP 219 and the
> stackless dev day discussion at Spam 9.
> 
> I left the dev day discussion with the following takehome message:
> There is a tension between Stackless Python on the one hand and making
> Python easy to embed in and extend with C programs on the other hand.
> The PEP describes this as the major difficulty with C Python.  I won't
> repeat the discussion of the problem there.

You know, even though I would like to have some of the Stackless features, my 
skeptical reaction to some of the other Grand Ideas circulating at IPC9, 
including static types, leads me to think I might not be thinking clearly on 
the Stackless question.

I think that if there is no way to address the many important concerns raised 
by people at the Stackless session (minus the "easy to learn" argument IMO), 
Stackless is probably a bad idea to shove into Python.

I still think that the Stackless execution structure would be a huge 
performance boost in many XML processing tasks, but that's not worth making 
Python intractable for extension writers.

Maybe it's not so bad for Stackless to remain a branch, given how closely 
Christian can work with Pythonlabs.  The main problem is the load on 
Christian, which would be mitigated as he gained collaborators.  The other 
problem would be that interested extension writers might need to maintain 2 
code-bases as well.  Maybe one could develop some sort of adaptor.

Or maybe Stackless should move to core, but only in P3K in which extension 
writers should be expecting weird and wonderful new models, anyway (right?)


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From tismer at tismer.com  Tue Mar 13 16:12:03 2001
From: tismer at tismer.com (Christian Tismer)
Date: Tue, 13 Mar 2001 16:12:03 +0100
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
References: <200103131447.HAA32016@localhost.localdomain>
Message-ID: <3AAE38C3.2C9BAA08@tismer.com>


Uche Ogbuji wrote:
> 
> > Here are some comments on Gordon's new draft of PEP 219 and the
> > stackless dev day discussion at Spam 9.
> >
> > I left the dev day discussion with the following takehome message:
> > There is a tension between Stackless Python on the one hand and making
> > Python easy to embed in and extend with C programs on the other hand.
> > The PEP describes this as the major difficulty with C Python.  I won't
> > repeat the discussion of the problem there.
> 
> You know, even though I would like to have some of the Stackless features, my
> skeptical reaction to some of the other Grand Ideas circulating at IPC9,
> including static types, leads me to think I might not be thinking clearly on
> the Stackless question.
> 
> I think that if there is no way to address the many important concerns raised
> by people at the Stackless session (minus the "easy to learn" argument IMO),
> Stackless is probably a bad idea to shove into Python.

Maybe I'm repeating myself, but I'd like to clarify:
I do not plan to introduce anything that forces anybody to change
her code. This is all about extending the current capabilities.

> I still think that the Stackless execution structure would be a huge
> performance boost in many XML processing tasks, but that's not worth making
> Python intractable for extension writers.

Extension writers only have to think about the Stackless
protocol (to be defined) if they want to play the Stackless
game. If this is not intended, this isn't all that bad. It only means
that they cannot switch a microthread while the extension does
a callback.
But that is all the same as today. So how could Stackless make
extensions intractable, unless someone *wants* to get all of it?

An XML processor in C will not take advantage from Stackless unless
it is designed for that. But nobody enforces this. Stackless can
behave as recursively as standard Python, and it is completely aware
of recursions. It will not break.

It is the programmer's choice whether to make an extension switchable
or not. This is just one more choice than exists today.

> Maybe it's not so bad for Stackless to remain a branch, given how closely
> Christian can work with Pythonlabs.  The main problem is the load on
> Christian, which would be mitigated as he gained collaborators.  The other
> problem would be that interested extension writers might need to maintain 2
> code-bases as well.  Maybe one could develop some sort of adaptor.
> 
> Or maybe Stackless should move to core, but only in P3K in which extension
> writers should be expecting weird and wonderful new models, anyway (right?)

That's no alternative. Remember Guido's words:
P3K will never become reality. It is a virtual
place to put all the things that might happen in some future.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From esr at snark.thyrsus.com  Tue Mar 13 16:32:51 2001
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Tue, 13 Mar 2001 10:32:51 -0500
Subject: [Python-Dev] CML2 compiler speedup
Message-ID: <200103131532.f2DFWpw04691@snark.thyrsus.com>

I bit the bullet and hand-rolled a recursive-descent expression parser
for CML2 to replace the Earley-algorithm parser described in my
previous note.  It is a little more than twice as fast as the SPARK
code, cutting the CML2 compiler runtime almost exactly in half.
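For readers curious what "hand-rolled recursive descent" looks like, here is
a minimal sketch for a toy arithmetic grammar (not the CML2 grammar): one
function per nonterminal, each consuming tokens left to right.

```python
import re

# Grammar: expr -> term ('+' term)*
#          term -> factor ('*' factor)*
#          factor -> NUMBER | '(' expr ')'
TOKEN = re.compile(r"\s*(\d+|[+*()])")

def tokenize(text):
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if not m:
            raise SyntaxError("bad input at position %d" % pos)
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

class Parser:
    def __init__(self, tokens):
        self.tokens, self.i = tokens, 0

    def peek(self):
        return self.tokens[self.i] if self.i < len(self.tokens) else None

    def eat(self, tok):
        assert self.peek() == tok, "expected %r" % tok
        self.i += 1

    def expr(self):                 # one function per nonterminal
        value = self.term()
        while self.peek() == "+":
            self.eat("+")
            value += self.term()
        return value

    def term(self):
        value = self.factor()
        while self.peek() == "*":
            self.eat("*")
            value *= self.factor()
        return value

    def factor(self):
        if self.peek() == "(":
            self.eat("(")
            value = self.expr()
            self.eat(")")
            return value
        value = int(self.peek())
        self.i += 1
        return value
```

Because the grammar's structure is burned directly into the call structure,
there is no table interpretation overhead, which is where the speedup over a
general Earley parser comes from.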

Sigh.  I had been intending to recommend SPARK for the Python standard
library -- as I pointed out in my PC9 paper, it would be the last
piece stock Python needs to be an effective workbench for
minilanguage construction.  Unfortunately I'm now convinced Paul
Prescod is right and it's too slow for production use, at least at
version 0.6.1.  

John Aycock says 0.7 will be substantially faster; I'll keep an eye on
this.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

The price of liberty is, always has been, and always will be blood.  The person
who is not willing to die for his liberty has already lost it to the first
scoundrel who is willing to risk dying to violate that person's liberty.  Are
you free? 
	-- Andrew Ford



From moshez at zadka.site.co.il  Tue Mar 13 07:20:47 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Tue, 13 Mar 2001 08:20:47 +0200
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
Message-ID: <E14ciAp-0005dJ-00@darjeeling>

After discussions in IPC9 one of the decisions was to set up a mailing
list for discussion of the numeric model of Python.

Subscribe here:

    http://lists.sourceforge.net/lists/listinfo/python-numerics

Or here:

    python-numerics-request at lists.sourceforge.net

I will post my PEPs there as soon as an initial checkin is completed.
Please direct all further numeric model discussion there.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From paul at pfdubois.com  Tue Mar 13 17:38:35 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Tue, 13 Mar 2001 08:38:35 -0800
Subject: [Python-Dev] Kinds
Message-ID: <ADEOIFHFONCLEEPKCACCIEKKCGAA.paul@pfdubois.com>

I was asked to write down what I said at the dev day session about kinds. I
have put this in the form of a proposal-like writeup which is attached. I
hope this helps you understand what I meant.

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: kinds.txt
URL: <http://mail.python.org/pipermail/python-dev/attachments/20010313/9f16e7f4/attachment.txt>

From guido at digicool.com  Tue Mar 13 17:43:42 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 11:43:42 -0500
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: Your message of "Tue, 06 Mar 2001 07:51:49 CST."
             <15012.60277.150431.237935@beluga.mojam.com> 
References: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>  
            <15012.60277.150431.237935@beluga.mojam.com> 
Message-ID: <200103131643.LAA01072@cj20424-a.reston1.va.home.com>

> Two things come to mind.  One, perhaps a more careful coding of urllib to
> avoid exposing names it shouldn't export would be a better choice.  Two,
> perhaps those symbols that are not documented but that would be useful when
> extending urllib functionality should be documented and added to __all__.
> 
> Here are the non-module names I didn't include in urllib.__all__:

Let me annotate these in-line:

>     MAXFTPCACHE			No
>     localhost				Yes
>     thishost				Yes
>     ftperrors				Yes
>     noheaders				No
>     ftpwrapper			No
>     addbase				No
>     addclosehook			No
>     addinfo				No
>     addinfourl			No
>     basejoin				Yes
>     toBytes				No
>     unwrap				Yes
>     splittype				Yes
>     splithost				Yes
>     splituser				Yes
>     splitpasswd			Yes
>     splitport				Yes
>     splitnport			Yes
>     splitquery			Yes
>     splittag				Yes
>     splitattr				Yes
>     splitvalue			Yes
>     splitgophertype			Yes
>     always_safe			No
>     getproxies_environment		No
>     getproxies			Yes
>     getproxies_registry		No
>     test1				No
>     reporthook			No
>     test				No
>     main				No
> 
> None are documented, so there are no guarantees if you use them (I have
> subclassed addinfourl in the past myself).

Note that there's a comment block "documenting" all the split*()
functions, indicating that I intended them to be public.  For the
rest, I'm making a best guess based on how useful these things are and
how closely tied to the implementation etc.
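The mechanism under discussion, sketched with a throwaway module
(mini_urllib and its contents are invented for illustration, loosely modeled
on urllib's helpers):

```python
import sys
import types

# A throwaway module: one helper listed in __all__, one deliberately
# left off the list.
source = '''
__all__ = ["splittype"]

def splittype(url):
    """'scheme:rest' -> (scheme, rest), loosely like urllib's helper."""
    scheme, sep, rest = url.partition(":")
    return (scheme, rest) if sep else (None, url)

def toBytes(url):   # would be exported by star-import without __all__
    return url.encode("ascii")
'''

mod = types.ModuleType("mini_urllib")
exec(source, mod.__dict__)
sys.modules["mini_urllib"] = mod

namespace = {}
exec("from mini_urllib import *", namespace)
# Only names listed in __all__ survive the star-import.
```

So pruning urllib.__all__ hides names from `from urllib import *` without
breaking code that imports them explicitly.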

--Guido van Rossum (home page: http://www.python.org/~guido/)




From jeremy at alum.mit.edu  Tue Mar 13 03:42:20 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 12 Mar 2001 21:42:20 -0500 (EST)
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
In-Reply-To: <3AAE38C3.2C9BAA08@tismer.com>
References: <200103131447.HAA32016@localhost.localdomain>
	<3AAE38C3.2C9BAA08@tismer.com>
Message-ID: <15021.35084.46284.376573@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "CT" == Christian Tismer <tismer at tismer.com> writes:

  CT> Maybe I'm repeating myself, but I'd like to clarify: I do not
  CT> plan to introduce anything that forces anybody to change her
  CT> code. This is all about extending the current capabilities.

The problem with this position is that C code that uses the old APIs
interferes in odd ways with features that depend on stackless,
e.g. the __xxx__ methods.[*]  If the old APIs work but are not
compatible, we'll end up having to rewrite all our extensions so that
they play nicely with stackless.

If we change the core and standard extensions to use stackless
interfaces, then this style will become the standard style.  If the
interface is simple, this is no problem.  If the interface is complex,
it may be a problem.  My point is that if we change the core APIs, we
place a new burden on extension writers.

Jeremy

    [*] If we fix the type-class dichotomy, will it have any effect on
    the stackful nature of some of these C calls?



From jeremy at alum.mit.edu  Tue Mar 13 03:47:41 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 12 Mar 2001 21:47:41 -0500 (EST)
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
In-Reply-To: <E14ciAp-0005dJ-00@darjeeling>
References: <E14ciAp-0005dJ-00@darjeeling>
Message-ID: <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>

We've spun off a lot of new lists recently.  I don't particularly care
for this approach, because I sometimes feel like I spend more time
subscribing to new lists than I do actually reading them <0.8 wink>.

I assume that most people are relieved to have the traffic taken off
python-dev.  (I can't think of any other reason to create a separate
list.)  But what's the well-informed Python hacker to do?  Subscribe
to dozens of different lists to discuss each different feature?

A possible solution: python-dev-all at python.org.  This list would be
subscribed to each of the special topic mailing lists.  People could
subscribe to it to get all of the mail without having to individually
subscribe to all the sublists.  Would this work?

Jeremy



From barry at digicool.com  Tue Mar 13 18:12:19 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Tue, 13 Mar 2001 12:12:19 -0500
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
References: <E14ciAp-0005dJ-00@darjeeling>
	<15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <15022.21747.94249.599599@anthem.wooz.org>

There was some discussion at IPC9 about implementing `topics' in
Mailman which I think would solve this problem nicely.  I don't have
time to go into much detail now, and it's definitely a medium-term
solution (since other work is taking priority right now).

-Barry



From aycock at csc.UVic.CA  Tue Mar 13 17:54:48 2001
From: aycock at csc.UVic.CA (John Aycock)
Date: Tue, 13 Mar 2001 08:54:48 -0800
Subject: [Python-Dev] Re: CML2 compiler slowness
Message-ID: <200103131654.IAA22731@valdes.csc.UVic.CA>

| From paulp at ActiveState.com Mon Mar 12 18:39:28 2001
| Is there a tutorial about how to make fast Spark grammars or should I go
| back and re-read my compiler construction books?

My advice would be to avoid heavy use of obviously ambiguous
constructions, like defining expressions to be
	E ::= E op E

Aside from that, the whole point of SPARK is to have the language you're
implementing up and running, fast -- even if you don't have a lot of
background in compiler theory.  It's not intended to spit out blazingly
fast production compilers.  If the result isn't fast enough for your
purposes, then you can replace SPARK components with faster ones; you're
not locked in to using the whole package.  Or, if you're patient, you can
wait for the tool to improve :-)
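John's point about ambiguity has a concrete cost: for E ::= E op E, an input with n operators admits a number of parse trees that grows with the Catalan numbers, and a general parser like SPARK's Earley engine pays for every one. Stratifying the grammar by precedence removes the ambiguity entirely. A minimal sketch of the stratified form, written here as a plain recursive-descent parser rather than a SPARK grammar (the helper names are illustrative, not SPARK API):

```python
import re

# Stratified, unambiguous grammar (each level binds tighter):
#   E ::= T (("+" | "-") T)*
#   T ::= F (("*" | "/") F)*
#   F ::= NUMBER | "(" E ")"
# Unlike E ::= E op E, every input has exactly one parse tree.

def tokenize(s):
    return re.findall(r"\d+|[()+\-*/]", s)

def parse(tokens):
    pos = [0]

    def peek():
        return tokens[pos[0]] if pos[0] < len(tokens) else None

    def eat(tok=None):
        t = tokens[pos[0]]
        if tok is not None and t != tok:
            raise SyntaxError("expected %r, got %r" % (tok, t))
        pos[0] += 1
        return t

    def factor():
        if peek() == "(":
            eat("(")
            node = expr()
            eat(")")
            return node
        return int(eat())

    def term():
        node = factor()
        while peek() in ("*", "/"):
            node = (eat(), node, factor())
        return node

    def expr():
        node = term()
        while peek() in ("+", "-"):
            node = (eat(), node, term())
        return node

    result = expr()
    if pos[0] != len(tokens):
        raise SyntaxError("trailing input")
    return result

print(parse(tokenize("1+2*3")))  # -> ('+', 1, ('*', 2, 3))
```

With the ambiguous E ::= E op E, "1+2*3" has two trees; the stratified grammar forces "*" to bind tighter, so exactly one tree exists and the parser never backtracks.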

John



From gmcm at hypernet.com  Tue Mar 13 18:17:39 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 12:17:39 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <3AAE0FE3.2206.7AB85588@localhost>

[Jeremy]
> Here are some comments on Gordon's new draft of PEP 219 and the
> stackless dev day discussion at Spam 9.
> 
> I left the dev day discussion with the following takehome
> message: There is a tension between Stackless Python on the one
> hand and making Python easy to embed in and extend with C
> programs on the other hand. The PEP describes this as the major
> difficulty with C Python.  I won't repeat the discussion of the
> problem there.

Almost all of the discussion about interpreter recursions is 
about completeness, *not* about usability. If you were to 
examine all the Stackless-using apps out there, I think you 
would find that they rely on a stackless version of only one 
builtin - apply().

I can think of 2 practical situations in which it would be *nice* 
to be rid of the recursion:

 - magic methods (__init__, __getitem__ and __getattr__ in 
particular). But magic methods are a convenience. There's 
absolutely nothing there that can't be done another way.

 - a GUI. Again, no big deal, because GUIs impose all kinds of 
restrictions to begin with. If you use a GUI with threads, you 
almost always have to dedicate one thread (usually the main 
one) to the GUI and be careful that the other threads don't 
touch the GUI directly. It's basically the same issue with 
Stackless.
 
As for the rest of the possible situations, demand is 
nonexistent. In an ideal world, we'd never have to answer the 
question "how come it didn't work?". But put on your 
application programmer's hat for a moment and see if you can 
think of a legitimate reason for, eg, one of the objects in an 
__add__ wanting to make use of a pre-existing coroutine 
inside the __add__ call. [Yeah, Tim can come up with a 
reason, but I did say "legitimate".]

> I would like to see a somewhat more detailed discussion of this
> in the PEP.  I think it's an important issue to work out before
> making a decision about a stack-light patch.

I'm not sure why you say that. The one comparable situation 
in normal Python is crossing threads in callbacks. With the 
exception of a couple of complete madmen (doing COM 
support), everyone else learns to avoid the situation. [Mark 
doesn't even claim to know *how* he solved the problem 
<wink>].
 
> The problem of nested interpreters and the C API seems to come up
> in several ways.  These are all touched on in the PEP, but not in
> much detail.  This message is mostly a request for more detail
> :-).
> 
>   - Stackless disallows transfer out of a nested interpreter.  (It
>     has, too; anything else would be insane.)  Therefore, the
>     specification for microthreads &c. will be complicated by a
>     listing of the places where control transfers are not
>     possible. The PEP says this is not ideal, but not crippling. 
>     I'd like to see an actual spec for where it's not allowed in
>     pure Python.  It may not be crippling, but it may be a
>     tremendous nuisance in practice; e.g. remember that __init__
>     calls create a critical section.

The one instance I can find on the Stackless list (of 
attempting to use a continuation across interpreter 
invocations) was a call to uthread.wait() in __init__. Arguably 
a (minor) nuisance, arguably bad coding practice (even if it 
worked).

I encountered it when trying to make a generator work with a 
for loop. So you end up using a while loop <shrug>.

It's disallowed wherever it's not accommodated. Listing those 
cases is probably not terribly helpful; I bet even Guido is 
sometimes surprised at what actually happens under the 
covers. The message "attempt to run a locked frame" is not 
very meaningful to the Stackless newbie, however.
 
[Christian answered the others...]


- Gordon



From DavidA at ActiveState.com  Tue Mar 13 18:25:49 2001
From: DavidA at ActiveState.com (David Ascher)
Date: Tue, 13 Mar 2001 09:25:49 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131142.GAA30567@cj20424-a.reston1.va.home.com>
Message-ID: <PLEJJNOHDIGGLDPOGPJJEEPNCNAA.DavidA@ActiveState.com>

GvR:

> [Paul]
> > David Ascher suggested during the talk that comparisons of floats could
> > raise a warning unless you turned that warning off (which only
> > knowledgable people would do). I think that would go a long way to
> > helping them find and deal with serious floating point inaccuracies in
> > their code.
>
> You mean only for == and !=, right?

Right.

> We should wait until 2.2 though --
> we haven't clearly decided that this is the way we want to go.

Sure.  It was just a suggestion for a way to address the inherent problems
in having newbies work w/ FP (where newbie in this case is 99.9% of the
programming population, IMO).

-david




From thomas at xs4all.net  Tue Mar 13 19:08:05 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 19:08:05 +0100
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
In-Reply-To: <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Mon, Mar 12, 2001 at 09:47:41PM -0500
References: <E14ciAp-0005dJ-00@darjeeling> <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <20010313190805.C404@xs4all.nl>

On Mon, Mar 12, 2001 at 09:47:41PM -0500, Jeremy Hylton wrote:

> We've spun off a lot of new lists recently.  I don't particularly care
> for this approach, because I sometimes feel like I spend more time
> subscribing to new lists than I do actually reading them <0.8 wink>.

And even if they are separate lists, people keep crossposting, completely
negating the idea behind separate lists. ;P I think the main reason for
separate lists is to allow non-python-dev-ers easy access to the lists. 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Tue Mar 13 19:29:56 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 13:29:56 -0500
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
In-Reply-To: <15021.35084.46284.376573@w221.z064000254.bwi-md.dsl.cnc.net>
References: <3AAE38C3.2C9BAA08@tismer.com>
Message-ID: <3AAE20D4.25660.7AFA8206@localhost>

> >>>>> "CT" == Christian Tismer <tismer at tismer.com> writes:
> 
>   CT> Maybe I'm repeating myself, but I'd like to clarify: I do not
>   CT> plan to introduce anything that forces anybody to change her
>   CT> code. This is all about extending the current capabilities.

[Jeremy] 
> The problem with this position is that C code that uses the old
> APIs interferes in odd ways with features that depend on
> stackless, e.g. the __xxx__ methods.[*]  If the old APIs work but
> are not compatible, we'll end up having to rewrite all our
> extensions so that they play nicely with stackless.

I don't understand. Python code calls C extension. C 
extension calls Python callback which tries to use a pre-
existing coroutine. Where is the "interference"? The callback 
exists only because the C extension has an API that uses 
callbacks. 

Well, OK, the callback doesn't have to be explicit. The C can 
go fumbling around in a passed in object and find something 
callable. But to call it "interference", I think you'd have to have 
a working program which stopped working when a C extension 
crept into it without the programmer noticing <wink>.

> If we change the core and standard extensions to use stackless
> interfaces, then this style will become the standard style.  If
> the interface is simple, this is no problem.  If the interface is
> complex, it may be a problem.  My point is that if we change the
> core APIs, we place a new burden on extension writers.

This is all *way* out of scope, but if you go the route of 
creating a pseudo-frame for the C code, it seems quite 
possible that the interface wouldn't have to change at all. We 
don't need any more args into PyEval_EvalCode. We don't 
need any more results out of it. Christian's stackless map 
implementation is proof-of-concept that you can do this stuff.

The issue (if and when we get around to "truly and completely 
stackless") is complexity for the Python internals 
programmer, not your typical object-wrapping / SWIG-swilling 
extension writer.


> Jeremy
> 
>     [*] If we fix the type-class dichotomy, will it have any
>     effect on the stackful nature of some of these C calls?

Don't know. What will those calls look like <wink>?

- Gordon



From jeremy at alum.mit.edu  Tue Mar 13 19:30:37 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue, 13 Mar 2001 13:30:37 -0500 (EST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <20010313185501.A7459@planck.physik.uni-konstanz.de>
References: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
	<3AAE0FE3.2206.7AB85588@localhost>
	<20010313185501.A7459@planck.physik.uni-konstanz.de>
Message-ID: <15022.26445.896017.406266@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "BR" == Bernd Rinn <Bernd.Rinn at epost.de> writes:

  BR> On Tue, Mar 13, 2001 at 12:17:39PM -0500, Gordon McMillan wrote:
  >> The one instance I can find on the Stackless list (of attempting
  >> to use a continuation across interpreter invocations) was a call
  >> the uthread.wait() in __init__. Arguably a (minor) nuisance,
  >> arguably bad coding practice (even if it worked).

[explanation of code practice that led to error omitted]

  BR> So I suspect that you might end up with a rule of thumb:

  BR> """ Don't use classes and libraries that use classes when doing
  BR> IO in microthreaded programs!  """

  BR> which might indeed be a problem. Am I overlooking something
  BR> fundamental here?

Thanks for asking this question in a clear and direct way.

A few other variations on the question come to mind:

    If a programmer uses a library implemented via coroutines, can she
    call library methods from an __xxx__ method?

    Can coroutines or microthreads co-exist with callbacks invoked by
    C extensions? 

    Can a program do any microthread IO in an __call__ method?

If any of these are the sort of "in theory" problems that the PEP alludes
to, then we need a full spec for what is and is not allowed.  It
doesn't make sense to tell programmers to follow unspecified
"reasonable" programming practices.

Jeremy



From ping at lfw.org  Tue Mar 13 19:44:37 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 13 Mar 2001 10:44:37 -0800 (PST)
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <20010313125418.A404@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10103131039260.13108-100000@skuld.kingmanhall.org>

On Tue, Mar 13, 2001 at 05:18:35AM -0500, Guido van Rossum wrote:
> I think the extent to which HWFP doesn't work for newbies is mostly
> related to the change we made in 2.0 where repr() (and hence the
> interactive prompt) show full precision, leading to annoyances like
> repr(1.1) == '1.1000000000000001'.

I'll argue now -- just as i argued back then, but louder! -- that
this isn't necessary.  repr(1.1) can be 1.1 without losing any precision.

Simply stated, you only need to display as many decimal places as are
necessary to regenerate the number.  So if x happens to be the
floating-point number closest to 1.1, then 1.1 is all you have to show.

By definition, if you type x = 1.1, x will get the floating-point
number closest in value to 1.1.  So x will print as 1.1.  And entering
1.1 will be sufficient to reproduce x exactly.
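Ping's rule can be checked empirically: a repr is correct as long as reading the string back yields the same IEEE-754 double, and the shortest such string is usually far shorter than 17 digits. (This is essentially the shortest-repr behavior CPython later adopted in 3.1, via David Gay's algorithm; the brute-force helper below is only an illustrative sketch, not the real algorithm.)

```python
# A float prints "correctly" if reading the string back yields the
# same IEEE-754 double, bit for bit.  17 significant digits always
# suffice, but the shortest round-tripping string is often shorter.

def shortest_roundtrip(x):
    # Try 1..17 significant digits; return the first string that
    # regenerates x exactly.  (Brute force, for illustration only.)
    for ndigits in range(1, 18):
        s = "%.*g" % (ndigits, x)
        if float(s) == x:
            return s
    return repr(x)

x = 1.1                        # the double closest to decimal 1.1
assert float(repr(x)) == x     # any correct repr must round-trip
print(shortest_roundtrip(1.1))             # -> 1.1
print(shortest_roundtrip(1.000000000001))  # -> 1.000000000001
```

So "1.1" really is enough to reproduce x exactly, while a value like 1.000000000001 genuinely needs the extra digits.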

Thomas Wouters wrote:
> I suspect that the change in float.__repr__() did reduce the number of
> suprises over something like this, though: (taken from a 1.5.2 interpreter)
> 
> >>> x = 1.000000000001
> >>> x
> 1.0
> >>> x == 1.0
> 0

Stick in a

    warning: floating-point numbers should not be tested for equality

and that should help at least somewhat.

If you follow the rule i stated above, you would get this:

    >>> x = 1.1
    >>> x
    1.1
    >>> x == 1.1
    warning: floating-point numbers should not be tested for equality
    1
    >>> x = 1.000000000001
    >>> x
    1.0000000000010001
    >>> x == 1.000000000001
    warning: floating-point numbers should not be tested for equality
    1
    >>> x == 1.0
    warning: floating-point numbers should not be tested for equality
    0

All of this seems quite reasonable to me.
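The warning shown in the session above can be prototyped today at the Python level, without touching the interpreter, by wrapping floats in a subclass that warns on equality tests. This is a hypothetical sketch of the proposed behavior only; the actual proposal would build the check into float itself:

```python
import warnings

class CheckedFloat(float):
    """Float that warns when tested for equality (sketch of the proposal)."""
    def __eq__(self, other):
        warnings.warn(
            "floating-point numbers should not be tested for equality",
            stacklevel=2)
        return float(self) == float(other)
    def __ne__(self, other):
        return not self.__eq__(other)
    __hash__ = float.__hash__

x = CheckedFloat(1.1)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    equal = (x == 1.1)        # emits the warning, then compares exactly
print(equal, len(caught))     # -> True 1
```

The comparison still returns the exact answer; the warning just flags the operation, which is all the proposal asks for.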



-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso




From skip at mojam.com  Tue Mar 13 20:48:15 2001
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 13 Mar 2001 13:48:15 -0600 (CST)
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: <200103131643.LAA01072@cj20424-a.reston1.va.home.com>
References: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
	<15012.60277.150431.237935@beluga.mojam.com>
	<200103131643.LAA01072@cj20424-a.reston1.va.home.com>
Message-ID: <15022.31103.7828.938707@beluga.mojam.com>

    Guido> Let me annotate these in-line:

    ...

I just added all the names marked "yes".

Skip



From gmcm at hypernet.com  Tue Mar 13 21:02:14 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 15:02:14 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.26445.896017.406266@w221.z064000254.bwi-md.dsl.cnc.net>
References: <20010313185501.A7459@planck.physik.uni-konstanz.de>
Message-ID: <3AAE3676.13712.7B4F001D@localhost>

Can we please get the followups under control? Bernd sent 
me a private email. I replied privately. Then he forwarded to 
Stackless. So I forwarded my reply to Stackless. Now Jeremy 
adds python-dev to the mix.

> >>>>> "BR" == Bernd Rinn <Bernd.Rinn at epost.de> writes:
> 
>   BR> On Tue, Mar 13, 2001 at 12:17:39PM -0500, Gordon McMillan wrote:
>   >> The one instance I can find on the Stackless list (of attempting
>   >> to use a continuation across interpreter invocations) was a call
>   >> to uthread.wait() in __init__.  Arguably a (minor) nuisance,
>   >> arguably bad coding practice (even if it worked).
> 
> [explanation of code practice that led to error omitted]
> 
>   BR> So I suspect that you might end up with a rule of thumb:
> 
>   BR> """ Don't use classes and libraries that use classes when
>   BR> doing IO in microthreaded programs!  """
> 
>   BR> which might indeed be a problem. Am I overlooking something
>   BR> fundamental here?

Synopsis of my reply: this is more a problem with uthreads 
than coroutines. In any (real) thread, you're limited to dealing 
with one non-blocking IO technique (eg, select) without going 
into a busy loop. If you're dedicating a (real) thread to select, it 
makes more sense to use coroutines than uthreads.

> A few other variations on the question come to mind:
> 
>     If a programmer uses a library implemented via coroutines, can
>     she call library methods from an __xxx__ method?

Certain situations won't work, but you knew that.
 
>     Can coroutines or microthreads co-exist with callbacks
>     invoked by C extensions? 

Again, in certain situations it won't work. Again, you knew that.
 
>     Can a program do any microthread IO in an __call__ method?

Considering you know the answer to that one too, you could've 
phrased it as a parsable question.
 
> If any of these are the sort of "in theory" problems that the PEP
> alludes to, then we need a full spec for what is and is not
> allowed.  It doesn't make sense to tell programmers to follow
> unspecified "reasonable" programming practices.

That's easy. In a nested invocation of the Python interpreter, 
you can't use a coroutine created in an outer interpreter. 

In the Python 2 documentation, there are 6 caveats listed in 
the thread module. That's a couple orders of magnitude 
different from the actual number of ways you can screw up 
using the thread module.

- Gordon



From jeremy at alum.mit.edu  Tue Mar 13 21:22:36 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue, 13 Mar 2001 15:22:36 -0500 (EST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE3676.13712.7B4F001D@localhost>
References: <20010313185501.A7459@planck.physik.uni-konstanz.de>
	<3AAE3676.13712.7B4F001D@localhost>
Message-ID: <15022.33164.673632.351851@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GMcM" == Gordon McMillan <gmcm at hypernet.com> writes:

  GMcM> Can we please get the followups under control? Bernd sent me a
  GMcM> private email. I replied privately. Then he forwarded to
  GMcM> Stackless. So I forwarded my reply to Stackless. Now Jeremy
  GMcM> adds python-dev to the mix.

I had no idea what was going on with forwards and the like.  It looks
like someone "bounced" messages, i.e. sent a message to me or a list
I'm on without including me or the list in the to or cc fields.  So I
couldn't tell how I received the message!  So I restored the original
recipients list of the thread (you, stackless, python-dev).

  >> >>>>> "BR" == Bernd Rinn <Bernd.Rinn at epost.de> writes:
  >> A few other variations on the question come to mind:
  >>
  >> If a programmer uses a library implemented via coroutines, can she
  >> call library methods from an __xxx__ method?

  GMcM> Certain situations won't work, but you knew that.

I expected that some won't work, but no one seems willing to tell me
exactly which ones will and which ones won't.  Should the caveat in
the documentation say "avoid using certain __xxx__ methods" <0.9
wink>. 
 
  >> Can coroutines or microthreads co-exist with callbacks invoked by
  >> C extensions?

  GMcM> Again, in certain situations it won't work. Again, you knew
  GMcM> that.

Wasn't sure.
 
  >> Can a program do any microthread IO in an __call__ method?

  GMcM> Considering you know the answer to that one too, you could've
  GMcM> phrased it as a parsable question.

Do I know the answer?  I assume the answer is no, but I don't feel
very certain.
 
  >> If any of these are the sort of "in theory" problems that the PEP
  >> alludes to, then we need a full spec for what is and is not
  >> allowed.  It doesn't make sense to tell programmers to follow
  >> unspecified "reasonable" programming practices.

  GMcM> That's easy. In a nested invocation of the Python interpreter,
  GMcM> you can't use a coroutine created in an outer interpreter.

Can we define these situations in a way that doesn't appeal to the
interpreter implementation?  If not, can we at least come up with a
list of what will and will not work at the python level?

  GMcM> In the Python 2 documentation, there are 6 caveats listed in
  GMcM> the thread module. That's a couple orders of magnitude
  GMcM> different from the actual number of ways you can screw up
  GMcM> using the thread module.

The caveats for the thread module seem like pretty minor stuff to me.
If you are writing a threaded application, don't expect code to
continue running after the main thread has exited.

The caveats for microthreads seems to cover a vast swath of territory:
The use of libraries or extension modules that involve callbacks or
instances with __xxx__ methods may lead to application failure.  I
worry about it because it doesn't sound very modular.  The use of
coroutines in one library means I can't use that library in certain
special cases in my own code.

I'm sorry if I sound grumpy, but I feel like I can't get a straight
answer despite several attempts.  At some level, it's fine to say that
there are some corner cases that won't work well with microthreads or
coroutines implemented on top of stackless python.  But I think the
PEP should discuss the details.  I've never written an application
that uses stackless-based microthreads or coroutines so I don't feel
confident in my judgement of the situation.

Which gets back to Bernd's original question:

  GMcM> >   BR> """ Don't use classes and libraries that use classes
  GMcM> >   BR> when doing IO in microthreaded programs!  """
  GMcM> > 
  GMcM> >   BR> which might indeed be a problem. Am I overlooking something
  GMcM> >   BR> fundamental here?

and the synopsis of your answer:

  GMcM> Synopsis of my reply: this is more a problem with uthreads 
  GMcM> than coroutines. In any (real) thread, you're limited to dealing 
  GMcM> with one non-blocking IO technique (eg, select) without going 
  GMcM> into a busy loop. If you're dedicating a (real) thread to select, it 
  GMcM> makes more sense to use coroutines than uthreads.

I don't understand how this addresses the question, but perhaps I
haven't seen your reply yet.  Mail gets through to python-dev and
stackless at different rates.

Jeremy



From bckfnn at worldonline.dk  Tue Mar 13 21:34:17 2001
From: bckfnn at worldonline.dk (Finn Bock)
Date: Tue, 13 Mar 2001 20:34:17 GMT
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <15021.24645.357064.856281@anthem.wooz.org>
References: <20010312220425.T404@xs4all.nl> <200103122332.SAA22948@cj20424-a.reston1.va.home.com> <15021.24645.357064.856281@anthem.wooz.org>
Message-ID: <3aae83f7.41314216@smtp.worldonline.dk>

>    GvR> Yes, that was on the list once but got dropped.  You might
>    GvR> want to get together with Finn and Samuele to see what their
>    GvR> rules are.  (They allow the use of some keywords at least as
>    GvR> keyword=expression arguments and as object.attribute names.)

[Barry]

>I'm actually a little surprised that the "Jython vs. CPython"
>differences page doesn't describe this (or am I missing it?):

It is mentioned at the bottom of 

     http://www.jython.org/docs/usejava.html

>    http://www.jython.org/docs/differences.html
>
>I thought it used to.

I have now also added it to the difference page.

>IIRC, keywords were allowed if there was no question of it introducing
>a statement.  So yes, keywords were allowed after the dot in attribute
>lookups, and as keywords in argument lists, but not as variable names
>on the lhs of an assignment (I don't remember if they were legal on
>the rhs, but it seems like that ought to be okay, and is actually
necessary if you allow them in argument lists).

- after "def"
- after a dot "." in trailer
- after "import"
- after "from" (in an import stmt)
- and as keyword argument names in arglist
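For comparison, CPython's grammar rejects a reserved word in every one of those positions, which is why interop code has to fall back on getattr/setattr and keyword dictionaries. A small sketch of the workarounds that Jython's Name/AnyName split makes unnecessary:

```python
class Record:
    pass

r = Record()
# r.class = 5              # SyntaxError in CPython: 'class' is reserved
setattr(r, "class", 5)     # the standard workaround
print(getattr(r, "class"))        # -> 5

def describe(**kwargs):
    return kwargs

# describe(print=1)        # SyntaxError in CPython 2.x ('print' was a keyword)
print(describe(**{"print": 1}))   # -> {'print': 1}
```

Jython's relaxed grammar lets the natural spellings through in the positions Finn lists, while CPython users are stuck with the dictionary/setattr idioms above.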

>It would eliminate much of the need for writing obfuscated code like
>"class_" or "klass".

Not the rules as Jython currently has them. Jython only allows the *use*
of external code that contains reserved words as class, method, or
attribute names, including overriding such methods.

The distinction between the Name and AnyName grammar productions has
worked very well for us, but I don't think of it as a general "keywords
can be used as identifiers" feature.

regards,
finn



From barry at digicool.com  Tue Mar 13 21:44:04 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Tue, 13 Mar 2001 15:44:04 -0500
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl>
	<200103122332.SAA22948@cj20424-a.reston1.va.home.com>
	<15021.24645.357064.856281@anthem.wooz.org>
	<3aae83f7.41314216@smtp.worldonline.dk>
Message-ID: <15022.34452.183052.362184@anthem.wooz.org>

>>>>> "FB" == Finn Bock <bckfnn at worldonline.dk> writes:

    | - and as keyword argument names in arglist

I think this last one doesn't work:

-------------------- snip snip --------------------
Jython 2.0 on java1.3.0 (JIT: jitc)
Type "copyright", "credits" or "license" for more information.
>>> def foo(class=None): pass
Traceback (innermost last):
  (no code object) at line 0
  File "<console>", line 1
	def foo(class=None): pass
	        ^
SyntaxError: invalid syntax
>>> def foo(print=None): pass
Traceback (innermost last):
  (no code object) at line 0
  File "<console>", line 1
	def foo(print=None): pass
	        ^
SyntaxError: invalid syntax
-------------------- snip snip --------------------

-Barry



From akuchlin at mems-exchange.org  Tue Mar 13 22:33:31 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 13 Mar 2001 16:33:31 -0500
Subject: [Python-Dev] Removing doc/howto on python.org
Message-ID: <E14cwQ7-0003q3-00@ute.cnri.reston.va.us>

Looking at a bug report Fred forwarded, I realized that after
py-howto.sourceforge.net was set up, www.python.org/doc/howto was
never changed to redirect to the SF site instead.  As of this
afternoon, that's now done; links on www.python.org have been updated,
and I've added the redirect.

Question: is it worth blowing away the doc/howto/ tree now, or should
it just be left there, inaccessible, until work on www.python.org
resumes?

--amk



From tismer at tismer.com  Tue Mar 13 23:44:22 2001
From: tismer at tismer.com (Christian Tismer)
Date: Tue, 13 Mar 2001 23:44:22 +0100
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
References: <200103131447.HAA32016@localhost.localdomain>
		<3AAE38C3.2C9BAA08@tismer.com> <15021.35084.46284.376573@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <3AAEA2C6.7F1DD2CE@tismer.com>


Jeremy Hylton wrote:
> 
> >>>>> "CT" == Christian Tismer <tismer at tismer.com> writes:
> 
>   CT> Maybe I'm repeating myself, but I'd like to clarify: I do not
>   CT> plan to introduce anything that forces anybody to change her
>   CT> code. This is all about extending the current capabilities.
> 
> The problem with this position is that C code that uses the old APIs
> interferes in odd ways with features that depend on stackless,
> e.g. the __xxx__ methods.[*]  If the old APIs work but are not
> compatible, we'll end up having to rewrite all our extensions so that
> they play nicely with stackless.

My idea was to keep all interfaces as they are, add a stackless flag,
and add stackless versions of all those calls. These are used when
they exist. If not, the old, recursive calls are used. If we can
find such a flag, we're fine. If not, we're hosed.
There is no point in forcing everybody to play nicely with Stackless.

> If we change the core and standard extensions to use stackless
> interfaces, then this style will become the standard style.  If the
> interface is simple, this is no problem.  If the interface is complex,
> it may be a problem.  My point is that if we change the core APIs, we
> place a new burden on extension writers.

My point is that if we extend the core APIs, we do not place
a burden on extension writers, given that we can do the extension
in a transparent way.

> Jeremy
> 
>     [*] If we fix the type-class dichotomy, will it have any effect on
>     the stackful nature of some of these C calls?

I truely cannot answer this one.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From gmcm at hypernet.com  Tue Mar 13 23:16:24 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 17:16:24 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.33164.673632.351851@w221.z064000254.bwi-md.dsl.cnc.net>
References: <3AAE3676.13712.7B4F001D@localhost>
Message-ID: <3AAE55E8.4865.7BC9D6B2@localhost>

[Jeremy]
>   >> If a programmer uses a library implemented via coroutines,
>   >> can she call library methods from an __xxx__ method?
> 
>   GMcM> Certain situations won't work, but you knew that.
> 
> I expected that some won't work, but no one seems willing to tell
> me exactly which ones will and which ones won't.  Should the
> caveat in the documentation say "avoid using certain __xxx__
> methods" <0.9 wink>. 

Within an __xxx__ method, you cannot *use* a coroutine not 
created in that method. That is true in current Stackless and 
will be true in Stack-lite. The presence of "library" in the 
question is a distraction.

I guess if you think of a coroutine as just another kind of 
callable object, this looks like a strong limitation. But you 
don't find yourself thinking of threads as plain old callable 
objects, do you? In a threaded program, no matter how 
carefully designed, there is a lot of thread detritus lying 
around. If you don't stay conscious of the transfers of control 
that may happen, you will screw up.

Despite the limitation on using coroutines in magic methods, 
coroutines have an advantage in that transfers of control only 
happen when you want them to. So avoiding unwanted 
transfers of control is vastly easier.
 
>   >> Can coroutines or microthreads co-exist with callbacks
>   >> invoked by C extensions?
> 
>   GMcM> Again, in certain situations it won't work. Again, you knew
>   GMcM> that.
> 
> Wasn't sure.

It's exactly the same situation.
 
>   >> Can a program do any microthread IO in an __call__ method?
> 
>   GMcM> Considering you know the answer to that one too, you could've
>   GMcM> phrased it as a parsable question.
> 
> Do I know the answer?  I assume the answer is no, but I don't
> feel very certain.

What is "microthreaded IO"? Probably the attempt to yield 
control if the IO operation would block. Would doing that 
inside __call__ work with microthreads? No. 

It's not my decision whether this particular situation 
needs to be documented. Sometime between the 2nd and 5th 
times the programmer encounters this exception, they'll say 
"Oh phooey, I can't do this in __call__, I need an explicit 
method instead."  Python has never claimed that __xxx__ 
methods are safe as milk. Quite the contrary.

 
>   >> If any of these are the sort of "in theory" problems that the
>   >> PEP alludes to, then we need a full spec for what is and is
>   >> not allowed.  It doesn't make sense to tell programmers to
>   >> follow unspecified "reasonable" programming practices.
> 
>   GMcM> That's easy. In a nested invocation of the Python
>   GMcM> interpreter, you can't use a coroutine created in an
>   GMcM> outer interpreter.
> 
> Can we define these situations in a way that doesn't appeal to
> the interpreter implementation? 

No, because it's implementation dependent.

> If not, can we at least come up
> with a list of what will and will not work at the python level?

Does Python attempt to catalogue all the ways you can screw 
up using magic methods? Using threads? How 'bout the 
metaclass hook? Even stronger, do we catalogue all the ways 
that an end-user-programmer can get bit by using a library 
written by someone else that makes use of these facilities?
 
>   GMcM> In the Python 2 documentation, there are 6 caveats listed
>   GMcM> in the thread module. That's a couple of orders of magnitude
>   GMcM> different from the actual number of ways you can screw up
>   GMcM> using the thread module.
> 
> The caveats for the thread module seem like pretty minor stuff to
> me. If you are writing a threaded application, don't expect code
> to continue running after the main thread has exited.

Well, the thread caveats don't mention the consequences of 
starting and running a thread within an __init__ method.  

> The caveats for microthreads seems to cover a vast swath of
> territory: The use of libraries or extension modules that involve
> callbacks or instances with __xxx__ methods may lead to
> application failure. 

While your statement is true on the face of it, it is very 
misleading. Things will only fall apart when you code an 
__xxx__ method or callback that uses a pre-existing coroutine 
(or does a uthread swap). You can very easily get in trouble 
right now with threads and callbacks. But the real point is that 
it is *you* the programmer trying to do something that won't 
work (and, BTW, getting notified right away), not some library 
pulling a fast one on you. (Yes, the library could make things 
very hard for you, but that's nothing new.)

Application programmers do not need magic methods. Ever. 
They are very handy for people creating libraries for application 
programmers to use, but we already presume (naively) that 
these people know what they're doing.

> I worry about it because it doesn't sound
> very modular.  The use of coroutines in one library means I can't
> use that library in certain special cases in my own code.

With a little familiarity, you'll find that coroutines are a good 
deal more modular than threads.

In order for that library to violate your expectations, that library 
must be conscious of multiple coroutines (otherwise, it's just a 
plain stackful call / return). It must have kept a coroutine from 
some other call, or had you pass one in. So you (if at all 
clueful <wink>) will be conscious that something is going on 
here.

The issue is the same as if you used a framework which used 
real threads, but never documented anything about the 
threads. You code callbacks that naively and independently 
mutate a global collection. Do you blame Python?

> I'm sorry if I sound grumpy, but I feel like I can't get a
> straight answer despite several attempts.  At some level, it's
> fine to say that there are some corner cases that won't work well
> with microthreads or coroutines implemented on top of stackless
> python.  But I think the PEP should discuss the details.  I've
> never written an application that uses stackless-based
> microthreads or coroutines so I don't feel confident in my
> judgement of the situation.

And where on the fearful-to-confident scale was Jeremy 
when he was just getting introduced to threads?
 
> Which gets back to Bernd's original question:
> 
>   GMcM> >   BR> """ Don't use classes and libraries that use
>   GMcM> >   BR> classes when doing IO in microthreaded programs! """
>   GMcM> >
>   GMcM> >   BR> which might indeed be a problem. Am I overlooking
>   GMcM> >   BR> something fundamental here?
> 
> and the synopsis of your answer:
> 
>   GMcM> Synopsis of my reply: this is more a problem with
>   GMcM> uthreads than coroutines. In any (real) thread, you're
>   GMcM> limited to dealing with one non-blocking IO technique
>   GMcM> (eg, select) without going into a busy loop. If you're
>   GMcM> dedicating a (real) thread to select, it makes more sense
>   GMcM> to use coroutines than uthreads.
> 
> I don't understand how this addresses the question, but perhaps I
> haven't seen your reply yet.  Mail gets through to python-dev and
> stackless at different rates.

Coroutines only swap voluntarily. It's very obvious where these 
transfers of control take place, and hence simple to control when 
they take place. My suspicion is that most people use 
uthreads because they use a familiar model. Not many people 
are used to coroutines, but many situations would be more 
profitably approached with coroutines than uthreads.
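[In today's Python the voluntary-swap point can be made literal with generators; a tiny round-robin scheduler as a sketch -- this is not Stackless's API, just an illustration of "transfers only happen where you wrote one":]

```python
def task(name, count, log):
    for i in range(count):
        log.append((name, i))
        yield                       # the ONLY point where control can leave

def round_robin(tasks):
    # Resume each task in turn until every one has finished.
    tasks = list(tasks)
    while tasks:
        current = tasks.pop(0)
        try:
            next(current)           # run until the task's next yield
            tasks.append(current)
        except StopIteration:
            pass                    # task finished; drop it

log = []
round_robin([task("ping", 2, log), task("pong", 2, log)])
print(log)  # [('ping', 0), ('pong', 0), ('ping', 1), ('pong', 1)]
```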

- Gordon



From fredrik at pythonware.com  Wed Mar 14 01:28:20 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 01:28:20 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org>
Message-ID: <000b01c0ac1d$ad79bec0$e46940d5@hagrid>

barry wrote:
>
>    | - and as keyword argument names in arglist
>
> I think this last one doesn't work:
> 
> -------------------- snip snip --------------------
> Jython 2.0 on java1.3.0 (JIT: jitc)
> Type "copyright", "credits" or "license" for more information.
> >>> def foo(class=None): pass
> Traceback (innermost last):
>   (no code object) at line 0
>   File "<console>", line 1
> def foo(class=None): pass
>         ^
> SyntaxError: invalid syntax
> >>> def foo(print=None): pass
> Traceback (innermost last):
>   (no code object) at line 0
>   File "<console>", line 1
> def foo(print=None): pass
>         ^
> SyntaxError: invalid syntax
> -------------------- snip snip --------------------

>>> def spam(**kw):
...     print kw
...
>>> spam(class=1)
{'class': 1}
>>> spam(print=1)
{'print': 1}
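[For keyword names that are reserved words, dict unpacking works in any CPython regardless of how the grammar treats bare keywords at the call site; a sketch:]

```python
def spam(**kw):
    return kw

# 'class' is a reserved word, but it can still arrive as a
# keyword-argument name via ** unpacking:
print(spam(**{'class': 1}))  # {'class': 1}
```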

Cheers /F




From guido at digicool.com  Wed Mar 14 01:55:54 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 19:55:54 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: Your message of "Tue, 13 Mar 2001 17:16:24 EST."
             <3AAE55E8.4865.7BC9D6B2@localhost> 
References: <3AAE3676.13712.7B4F001D@localhost>  
            <3AAE55E8.4865.7BC9D6B2@localhost> 
Message-ID: <200103140055.TAA02495@cj20424-a.reston1.va.home.com>

I've been following this discussion anxiously.  There's one
application of stackless where I think the restrictions *do* come into
play.  Gordon wrote a nice socket demo where multiple coroutines or
uthreads were scheduled by a single scheduler that did a select() on
all open sockets.  I would think that if you use this a lot, e.g. for
all your socket I/O, you might get in trouble sometimes when you
initiate a socket operation from within e.g. __init__ but find you
have to complete it later.

How realistic is this danger?  How serious is this demo?
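[Gordon's SelectDispatcher code isn't reproduced here; as a rough sketch of the *shape* of such a scheduler, with plain generators standing in for Stackless coroutines and all names hypothetical:]

```python
import select
import socket

def echo(conn):
    """A task that yields its socket whenever it must wait for input."""
    while True:
        yield conn                     # suspend until conn is readable
        data = conn.recv(1024)
        if not data:                   # EOF: peer closed its end
            conn.close()
            return
        conn.sendall(data)

def scheduler(tasks):
    """Resume each task when select() reports its socket as readable."""
    waiting = {}                       # socket -> suspended task
    for t in tasks:
        waiting[next(t)] = t           # run each task to its first wait
    while waiting:
        ready, _, _ = select.select(list(waiting), [], [])
        for sock in ready:
            t = waiting.pop(sock)
            try:
                waiting[next(t)] = t   # run until the next wait
            except StopIteration:
                pass                   # task finished

# One echo task serving one end of a socketpair:
a, b = socket.socketpair()
a.sendall(b"ping")
a.shutdown(socket.SHUT_WR)             # EOF lets the task finish
scheduler([echo(b)])
print(a.recv(1024))  # b'ping'
a.close()
```

The restriction under discussion is visible in the structure: a task can only suspend at a `yield` in its own frame, so initiating an operation somewhere it cannot yield (e.g. deep inside `__init__`) has nowhere to park the wait.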

--Guido van Rossum (home page: http://www.python.org/~guido/)



From greg at cosc.canterbury.ac.nz  Wed Mar 14 02:28:49 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 14 Mar 2001 14:28:49 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE0FE3.2206.7AB85588@localhost>
Message-ID: <200103140128.OAA05167@s454.cosc.canterbury.ac.nz>

Gordon McMillan <gmcm at hypernet.com>:

> But magic methods are a convenience. There's 
> absolutely nothing there that can't be done another way.

Strictly speaking that's true, but from a practical standpoint
I think you will *have* to address __init__ at least, because
it is so ubiquitous and ingrained in the Python programmer's
psyche. Asking Python programmers to give up using __init__
methods will be greeted with about as much enthusiasm as if
you asked them to give up using all identifiers containing
the letter 'e'. :-)

>  - a GUI. Again, no big deal

Sorry, but I think it *is* a significantly large deal...

> be careful that the other threads don't 
> touch the GUI directly. It's basically the same issue with 
> Stackless.

But the other threads don't have to touch the GUI directly
to be a problem.

Suppose I'm building an IDE and I want a button which spawns
a microthread to execute the user's code. The thread doesn't
make any GUI calls itself, but it's spawned from inside a
callback, which, if I understand correctly, will be impossible.

> The one comparable situation 
> in normal Python is crossing threads in callbacks. With the 
> exception of a couple of complete madmen (doing COM 
> support), everyone else learns to avoid the situation.

But if you can't even *start* a thread using a callback,
how do you do anything with threads at all?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From gmcm at hypernet.com  Wed Mar 14 03:22:44 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 21:22:44 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103140055.TAA02495@cj20424-a.reston1.va.home.com>
References: Your message of "Tue, 13 Mar 2001 17:16:24 EST."             <3AAE55E8.4865.7BC9D6B2@localhost> 
Message-ID: <3AAE8FA4.31567.7CAB5C89@localhost>

[Guido]
> I've been following this discussion anxiously.  There's one
> application of stackless where I think the restrictions *do* come
> into play.  Gordon wrote a nice socket demo where multiple
> coroutines or uthreads were scheduled by a single scheduler that
> did a select() on all open sockets.  I would think that if you
> use this a lot, e.g. for all your socket I/O, you might get in
> trouble sometimes when you initiate a socket operation from
> within e.g. __init__ but find you have to complete it later.

Exactly as hard as it is not to run() a thread from within the 
Thread __init__. Most threaders have probably long forgotten 
that they tried that -- once.
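[The thread analogue, for reference (a sketch; starting the thread inside `__init__` would let `run()` race with the rest of construction):]

```python
import threading

class Worker(threading.Thread):
    def __init__(self):
        super().__init__()
        self.result = None
        # Tempting but wrong here: self.start() -- run() could then
        # execute before a subclass finishes its own __init__.

    def run(self):
        self.result = 6 * 7

w = Worker()
w.start()        # start only after construction is complete
w.join()
print(w.result)  # 42
```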

> How realistic is this danger?  How serious is this demo?

It's not a demo. It's in use (proprietary code layered on top of 
SelectDispatcher which is open) as part of a service a major 
player in the video editing industry has recently launched, 
both on the client and server side. Anyone in that industry can 
probably figure out who and (if they read the trades) maybe 
even what from the above, but I'm not comfortable saying more 
publicly.

- Gordon



From gmcm at hypernet.com  Wed Mar 14 03:55:44 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 21:55:44 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103140128.OAA05167@s454.cosc.canterbury.ac.nz>
References: <3AAE0FE3.2206.7AB85588@localhost>
Message-ID: <3AAE9760.19887.7CC991FF@localhost>

Greg Ewing wrote:

> Gordon McMillan <gmcm at hypernet.com>:
> 
> > But magic methods are a convenience. There's 
> > absolutely nothing there that can't be done another way.
> 
> Strictly speaking that's true, but from a practical standpoint I
> think you will *have* to address __init__ at least, because it is
> so ubiquitous and ingrained in the Python programmer's psyche.
> Asking Python programmers to give up using __init__ methods will
> be greeted with about as much enthusiasm as if you asked them to
> give up using all identifiers containing the letter 'e'. :-)

No one's asking them to give up __init__. Just asking them 
not to transfer control from inside an __init__. There are good 
reasons not to transfer control to another thread from within an 
__init__, too.
 
> >  - a GUI. Again, no big deal
> 
> Sorry, but I think it *is* a significantly large deal...
> 
> > be careful that the other threads don't 
> > touch the GUI directly. It's basically the same issue with
> > Stackless.
> 
> But the other threads don't have to touch the GUI directly
> to be a problem.
> 
> Suppose I'm building an IDE and I want a button which spawns a
> microthread to execute the user's code. The thread doesn't make
> any GUI calls itself, but it's spawned from inside a callback,
> which, if I understand correctly, will be impossible.

For a uthread, if it swaps out, yes, because that's an attempt 
to transfer to another uthread not spawned by the callback. So 
you will get an exception if you try it. If you simply want to 
create and use coroutines from within the callback, that's fine 
(though not terribly useful, since the GUI is blocked till you're 
done).
 
> > The one comparable situation 
> > in normal Python is crossing threads in callbacks. With the
> > exception of a couple of complete madmen (doing COM support),
> > everyone else learns to avoid the situation.
> 
> But if you can't even *start* a thread using a callback,
> how do you do anything with threads at all?

Checking the couple GUIs I've done that use threads (mostly I 
use idletasks in a GUI for background stuff) I notice I create 
the threads before starting the GUI. So in this case, I'd 
probably have a worker thread (real) and the GUI thread (real). 
The callback would queue up some work for the worker thread 
and return. The worker thread can use continuations or 
uthreads all it wants.
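[That division of labor is easy to sketch with a queue (a modern sketch; `queue.Queue` stands in for whatever hand-off mechanism the GUI toolkit allows):]

```python
import queue
import threading

work = queue.Queue()

def worker():
    # The worker thread is free to use coroutines / uthreads internally;
    # the GUI thread never blocks on it.
    while True:
        job = work.get()
        if job is None:          # sentinel: shut down
            break
        job()
        work.task_done()

t = threading.Thread(target=worker)
t.start()

results = []

def on_button_click():
    # A GUI callback just queues the work and returns immediately.
    work.put(lambda: results.append("done"))

on_button_click()
work.join()                      # wait for queued jobs to finish
work.put(None)
t.join()
print(results)  # ['done']
```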

My comments about GUIs were basically saying that you 
*have* to think about this stuff when you design a GUI - they 
all have rather strong opinions about how your app should be 
architected. You can get into trouble with any of the 
techniques (events, threads, idletasks...) they promote / allow 
/ use. I know it's gotten better, but not very long ago you had 
to be very careful simply to get Tk and threads to coexist.

I usually use idle tasks precisely because the chore of 
breaking my task into 0.1 sec chunks is usually less onerous 
than trying to get the GUI to let me do it some other way.

[Now I'll get floods of emails telling me *this* GUI lets me do it 
*that* way...  As far as I'm concerned, "least worst" is all any 
GUI can aspire to.]

- Gordon



From tim.one at home.com  Wed Mar 14 04:04:31 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 13 Mar 2001 22:04:31 -0500
Subject: [Python-Dev] comments on PEP 219
In-Reply-To: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIHJFAA.tim.one@home.com>

[Jeremy Hylton]
> ...
> One other set of issues, that is sort-of out of bounds for this
> particular PEP, is what control features do we want that can only be
> implemented with stackless.  Can we implement generators or coroutines
> efficiently without a stackless approach?

Icon/CLU-style generator/iterators always return/suspend directly to their
immediate caller/resumer, so it's impossible to get a C stack frame "stuck in
the middle":  whenever they're ready to yield (suspend or return), there's
never anything between them and the context that gave them control  (and
whether the context was coded in C or Python -- generators don't care).

While Icon/CLU do not do so, a generator/iterator in this sense can be a
self-contained object, passed around and resumed by anyone who feels like it;
this kind of object is little more than a single Python execution frame,
popped from the Python stack upon suspension and pushed back on upon
resumption.  For this reason, recursive interpreter calls don't bother it:
whenever it stops or pauses, it's at the tip of the current thread of
control, and returns control to "the next" frame, just like a vanilla
function return.  So if the stack is a linear list in the absence of
generators, it remains so in their presence.  It also follows that it's fine
to resume a generator by making a recursive call into the interpreter (the
resumption sequence differs from a function call in that it must set up the
guts of the eval loop from the state saved in the generator's execution
frame, rather than create a new execution frame).
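[Python later grew exactly this Icon/CLU-style facility (generators, PEP 255): the frame suspends at `yield` and always hands control straight back to its immediate resumer. A sketch in today's Python:]

```python
def countdown(n):
    # The frame is suspended at each yield and popped off the stack;
    # next() pushes it back and resumes right where it left off.
    while n > 0:
        yield n
        n -= 1

g = countdown(3)
print(next(g))   # 3
print(next(g))   # 2
print(list(g))   # [1]
```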

But Guido usually has in mind a much fancier form of generator (note:  contra
PEP 219, I didn't write generator.py -- Guido wrote that after hearing me say
"generator" and falling for Majewski's hypergeneralization of the concept
<0.8 wink>), which can suspend to *any* routine "up the chain".  Then C stack
frames can certainly get stuck in the middle, and so that style of generator
is much harder to implement given the way the interpreter currently works.
In Icon *this* style of "generator" is almost never used, in part because it
requires using Icon's optional "co-expression" facilities (which are optional
because they require hairy platform-dependent assembler to trick the platform
C into supporting multiple stacks; Icon's generators don't need any of that).
CLU has nothing like it.

Ditto for coroutines.




From skip at pobox.com  Wed Mar 14 04:12:02 2001
From: skip at pobox.com (Skip Montanaro)
Date: Tue, 13 Mar 2001 21:12:02 -0600 (CST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE9760.19887.7CC991FF@localhost>
References: <3AAE0FE3.2206.7AB85588@localhost>
	<3AAE9760.19887.7CC991FF@localhost>
Message-ID: <15022.57730.265706.483989@beluga.mojam.com>

>>>>> "Gordon" == Gordon McMillan <gmcm at hypernet.com> writes:

    Gordon> No one's asking them to give up __init__. Just asking them not
    Gordon> to transfer control from inside an __init__. There are good
    Gordon> reasons not to transfer control to another thread from within an
    Gordon> __init__, too.
 
Is this same restriction placed on all "magic" methods like __getitem__?  Is
this the semantic difference between Stackless and CPython that people are
getting all in a lather about?

Skip






From gmcm at hypernet.com  Wed Mar 14 04:25:03 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 22:25:03 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.57730.265706.483989@beluga.mojam.com>
References: <3AAE9760.19887.7CC991FF@localhost>
Message-ID: <3AAE9E3F.9635.7CE46C9C@localhost>

> >>>>> "Gordon" == Gordon McMillan <gmcm at hypernet.com> writes:
> 
>     Gordon> No one's asking them to give up __init__. Just asking
>     Gordon> them not to transfer control from inside an __init__.
>     Gordon> There are good reasons not to transfer control to
>     Gordon> another thread from within an __init__, too.
> 
> Is this same restriction placed on all "magic" methods like
> __getitem__?  

In the absence of making them interpreter-recursion free, yes.

> Is this the semantic difference between Stackless
> and CPython that people are getting all in a lather about?

What semantic difference? You can't transfer control to a 
coroutine / uthread in a magic method in CPython, either 
<wink>.

- Gordon



From jeremy at alum.mit.edu  Wed Mar 14 02:17:39 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue, 13 Mar 2001 20:17:39 -0500 (EST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE9E3F.9635.7CE46C9C@localhost>
References: <3AAE9760.19887.7CC991FF@localhost>
	<3AAE9E3F.9635.7CE46C9C@localhost>
Message-ID: <15022.50867.210827.597710@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GMcM" == Gordon McMillan <gmcm at hypernet.com> writes:

  >> Is this the semantic difference between Stackless and CPython
  >> that people are getting all in a lather about?

  GMcM> What semantic difference? You can't transfer control to a
  GMcM> coroutine / urthread in a magic method in CPython, either
  GMcM> <wink>.

If I have a library or class that uses threads under the covers, I can
create the threads in whatever code block I want, regardless of what
is on the call stack above the block.  The reason that coroutines /
uthreads are different is that the semantics of control transfers are
tied to what the call stack looks like a) when the thread is created
and b) when a control transfer is attempted.

This restriction seems quite at odds with modularity.  (Could I import
a module that creates a thread within an __init__ method?)  The
correctness of a library or class depends on the entire call chain
involved in its use.

It's not at all modular, because a programmer could make a local
decision about organizing a particular module and cause errors in a
module that doesn't even use it directly.  This would occur if module A
uses uthreads, module B is a client of module A, and the user writes a
program that uses module B.  He unsuspectingly adds a call to module A
in an __init__ method and *boom*.

Jeremy

"Python is a language in which the use of uthreads in a module you
didn't know existed can render your own program unusable."  <wink>



From greg at cosc.canterbury.ac.nz  Wed Mar 14 06:09:42 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 14 Mar 2001 18:09:42 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE9760.19887.7CC991FF@localhost>
Message-ID: <200103140509.SAA05205@s454.cosc.canterbury.ac.nz>

> I'd probably have a worker thread (real) and the GUI thread (real). 

If I have to use real threads to get my uthreads to work
properly, there doesn't seem to be much point in using
uthreads to begin with.

> you *have* to think about this stuff when you design a GUI...
> You can get into trouble with any of the techniques...
> not very long ago you had to be very careful simply to get 
> TK and threads to coexist.

Microthreads should *free* one from all that nonsense. They
should be simple, straightforward, easy to use, and bulletproof.
Instead it seems they're going to be just as tricky to use
properly, only in different ways.

Oh, well, perhaps I'll take another look after a few more
releases and see if anything has improved.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim.one at home.com  Wed Mar 14 06:34:11 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 00:34:11 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131142.GAA30567@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEIMJFAA.tim.one@home.com>

[Paul Prescod]
> David Ascher suggested during the talk that comparisons of floats could
> raise a warning unless you turned that warning off (which only
> knowledgable people would do). I think that would go a long way to
> helping them find and deal with serious floating point inaccuracies in
> their code.

It would go a very short way -- but that may be better than nothing.  Most fp
disasters have to do with "catastrophic cancellation" (a tech term, not a
pejorative), and comparisons have nothing to do with those.  Alas, CC can't
be detected automatically short of implementing interval arithmetic, and even
then tends to raise way too many false alarms unless used in algorithms
designed specifically to exploit interval arithmetic.

[Guido]
> You mean only for == and !=, right?

You have to do all comparisons or none (see below), but in the former case a
warning is silly (groundless paranoia) *unless* the comparands are "close".

Before we boosted repr(float) precision so that people could *see* right off
that they didn't understand Python fp arithmetic, complaints came later.  For
example, I've lost track of how many times I've explained variants of this
one:

Q: How come this loop goes around 11 times?

>>> delta = 0.1
>>> x = 0.0
>>> while x < 1.0:   # no == or != here
...     print x
...     x = x + delta
...

0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
>>>

A: It's because 0.1 is not exactly representable in binary floating-point.

Just once out of all those times, someone came back several days later after
spending many hours struggling to understand what that really meant and
implied.  Their followup question was depressingly insightful:

Q. OK, I understand now that for 754 doubles, the closest possible
   approximation to one tenth is actually a little bit *larger* than
   0.1.  So how come when I add a thing *bigger* than one tenth together
   ten times, I get a result *smaller* than one?
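[Both halves of the puzzle are visible with the later `decimal` module (which did not exist when this was written): the stored double really is a bit *above* one tenth, but each `+=` rounds its result to 53 bits, and those rounding errors happen to pull the running sum below 1.0.]

```python
from decimal import Decimal

# The double nearest 0.1 is slightly ABOVE one tenth:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

x = 0.0
for _ in range(10):
    x += 0.1          # each += rounds the true sum to the nearest double
print(x)              # 0.9999999999999999
print(x < 1.0)        # True -- hence the loop body runs an 11th time
```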

the-fun-never-ends-ly y'rs  - tim




From tim.one at home.com  Wed Mar 14 07:01:24 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 01:01:24 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <Pine.LNX.4.10.10103131039260.13108-100000@skuld.kingmanhall.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIOJFAA.tim.one@home.com>

[Ka-Ping Yee]
> I'll argue now -- just as i argued back then, but louder! -- that
> this isn't necessary.  repr(1.1) can be 1.1 without losing any precision.
>
> Simply stated, you only need to display as many decimal places as are
> necessary to regenerate the number.  So if x happens to be the
> floating-point number closest to 1.1, then 1.1 is all you have to show.
>
> By definition, if you type x = 1.1, x will get the floating-point
> number closest in value to 1.1.

This claim is simply false unless the platform string->float routines do
proper rounding, and that's more demanding than even the anal 754 std
requires (because in the general case proper rounding requires bigint
arithmetic).

> So x will print as 1.1.

By magic <0.1 wink>?

This *can* work, but only if Python does float<->string conversions itself,
leaving the platform libc out of it.  I gave references to directly relevant
papers, and to David Gay's NETLIB implementation code, the last time we went
thru this.  Note that Gay's code bristles with platform #ifdef's, because
there is no portable way in C89 to get the bit-level info this requires.
It's some of the most excruciatingly delicate code I've ever plowed thru.  If
you want to submit it as a patch, I expect Guido will require a promise in
blood that he'll never have to maintain it <wink>.

BTW, Scheme implementations are required to do proper rounding in both
string<->float directions, and minimal-length (wrt idempotence) float->string
conversions (provided that a given Scheme supports floats at all).  That was
in fact the original inspiration for Clinger, Steele and White's work in this
area.  It's exactly what you want too (because it's exactly what you need to
make your earlier claims true).  A more recent paper by Dybvig and ??? (can't
remember now) builds on the earlier work, using Gay's code by reference as a
subroutine, and speeding some of the other cases where Gay's code is slothful
by a factor of about 70.
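[CPython did eventually adopt this approach -- shortest round-tripping repr, built on Gay-style correctly rounded conversion, in 2.7/3.1. In a modern interpreter:]

```python
x = 1.1
# repr() now yields the shortest decimal string that maps back
# to exactly the same double:
print(repr(x))               # '1.1'
assert float(repr(x)) == x   # round-trips by construction
```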

scheme-does-a-better-job-on-numerics-in-many-respects-ly y'rs  - tim




From tim.one at home.com  Wed Mar 14 07:21:57 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 01:21:57 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103140509.SAA05205@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEIPJFAA.tim.one@home.com>

[Greg Ewing]
> If I have to use real threads to get my uthreads to work
> properly, there doesn't seem to be much point in using
> uthreads to begin with.
> ...
> Microthreads should *free* one from all that nonsense. They
> should be simple, straightforward, easy to use, and bulletproof.
> Instead it seems they're going to be just as tricky to use
> properly, only in different ways.

Stackless uthreads don't exist to free you from nonsense, they exist because
they're much lighter than OS-level threads.  You can have many more of them
and context switching is much quicker.  Part of the price is that they're not
as flexible as OS-level threads:  because they get no support at all from the
OS, they have no way to deal with the way C (or any other language) uses the
HW stack (from where most of the odd-sounding restrictions derive).

One thing that impressed me at the Python Conference last week was how many
of the talks I attended presented work that relied on, or was in the process
of moving to, Stackless.  This stuff has *very* enthused users!  Unsure how
many rely on uthreads vs how many on coroutines (Stackless wasn't the focus
of any of these talks), but they're the same deal wrt restrictions.

BTW, I don't know of a coroutine facility in any x-platform language that
plays nicely (in the sense of not imposing mounds of implementation-derived
restrictions) across foreign-language boundaries.  If you do, let's get a
reference so we can rip off their secrets.

uthreads-are-much-easier-to-provide-in-an-os-than-in-a-language-ly
    y'rs  - tim




From tim.one at home.com  Wed Mar 14 08:27:21 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 02:27:21 -0500
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <200103131532.f2DFWpw04691@snark.thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>

[Eric S. Raymond]
> I bit the bullet and hand-rolled a recursive-descent expression parser
> for CML2 to replace the Earley-algorithm parser described in my
> previous note.  It is a little more than twice as fast as the SPARK
> code, cutting the CML2 compiler runtime almost exactly in half.
>
> Sigh.  I had been intending to recommend SPARK for the Python standard
> library -- as I pointed out in my PC9 paper, it would be the last
> piece stock Python needs to be an effective workbench for
> minilanguage construction.  Unfortunately I'm now convinced Paul
> Prescod is right and it's too slow for production use, at least at
> version 0.6.1.

If all you got out of crafting a one-grammar parser by hand is a measly
factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
parser generators for restricted grammars, in C).  For the all-purpose Earley
parser to get that close is really quite an accomplishment!  SPARK was
written primarily for rapid prototyping, at which it excels (how many times
did you change your grammar during development?  how much longer would it
have taken you to adjust had you needed to rework your RD parser each time?).
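[For concreteness, the recursive-descent shape being discussed (this is *not* the CML2 parser -- a generic sketch for arithmetic expressions, one function per grammar rule):]

```python
import re

def tokenize(src):
    # Integers and single-character operators; whitespace is skipped.
    for num, op in re.findall(r"(\d+)|(\S)", src):
        yield ("NUM", int(num)) if num else ("OP", op)

class Parser:
    """Minimal recursive-descent parser: expr := term ('+' term)*
                                         term := atom ('*' atom)*
                                         atom := NUM | '(' expr ')'"""
    def __init__(self, src):
        self.tokens = list(tokenize(src)) + [("EOF", None)]
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def next(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    def parse_expr(self):
        val = self.parse_term()
        while self.peek() == ("OP", "+"):
            self.next()
            val += self.parse_term()
        return val

    def parse_term(self):
        val = self.parse_atom()
        while self.peek() == ("OP", "*"):
            self.next()
            val *= self.parse_atom()
        return val

    def parse_atom(self):
        kind, value = self.next()
        if kind == "NUM":
            return value
        if (kind, value) == ("OP", "("):
            val = self.parse_expr()
            assert self.next() == ("OP", ")"), "missing closing paren"
            return val
        raise SyntaxError("unexpected token: %r" % (value,))

print(Parser("2 + 3*(4+1)").parse_expr())  # 17
```

Each rule becomes one method and the "table" is the call stack itself, which is why a hand-rolled version for a fixed grammar can beat a general-purpose parser but costs rework every time the grammar changes.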

perhaps-you're-just-praising-it-via-faint-damnation<wink>-ly y'rs  - tim




From fredrik at pythonware.com  Wed Mar 14 09:25:19 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 09:25:19 +0100
Subject: [Python-Dev] CML2 compiler speedup
References: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>
Message-ID: <014401c0ac60$4f0b1c60$e46940d5@hagrid>

tim wrote:
> If all you got out of crafting a one-grammar parser by hand is a measly
> factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
> parser generators for restricted grammars, in C).

talking about performance, has anyone played with using SRE's
lastindex/lastgroup stuff with SPARK?

(is there anything else I could do in SRE to make SPARK run faster?)

Cheers /F




From tismer at tismer.com  Wed Mar 14 10:19:44 2001
From: tismer at tismer.com (Christian Tismer)
Date: Wed, 14 Mar 2001 10:19:44 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <3AAE0FE3.2206.7AB85588@localhost>
		<3AAE9760.19887.7CC991FF@localhost> <15022.57730.265706.483989@beluga.mojam.com>
Message-ID: <3AAF37B0.DFCC027A@tismer.com>


Skip Montanaro wrote:
> 
> >>>>> "Gordon" == Gordon McMillan <gmcm at hypernet.com> writes:
> 
>     Gordon> No one's asking them to give up __init__. Just asking them not
>     Gordon> to transfer control from inside an __init__. There are good
>     Gordon> reasons not to transfer control to another thread from within an
>     Gordon> __init__, too.
> 
> Is this same restriction placed on all "magic" methods like __getitem__?  Is
> this the semantic difference between Stackless and CPython that people are
> getting all in a lather about?

Yes, at the moment all __xxx__ stuff.
The semantic difference is at a different location:
Normal function calls are free to switch around. That is the
big advantage over CPython, which might be called a semantic
difference.
The behavior/constraints of __xxx__ have not changed yet; here
both Pythons are exactly the same! :-)


ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From tismer at tismer.com  Wed Mar 14 10:39:17 2001
From: tismer at tismer.com (Christian Tismer)
Date: Wed, 14 Mar 2001 10:39:17 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103140128.OAA05167@s454.cosc.canterbury.ac.nz>
Message-ID: <3AAF3C45.1972981F@tismer.com>


Greg Ewing wrote:

<snip>

> Suppose I'm building an IDE and I want a button which spawns
> a microthread to execute the user's code. The thread doesn't
> make any GUI calls itself, but it's spawned from inside a
> callback, which, if I understand correctly, will be impossible.

This doesn't need to be a problem with Microthreads.
Your IDE can spawn a new process at any time. The
process will simply not be started until the interpreter recursion is
done. I think this is exactly what we want.
Similarly for the __init__ situation: usually you want
to create a new process, but you don't care when it
is finally scheduled.

So, the only remaining restriction is: If you *force* the
system to schedule microthreads in a recursive call, then
you will be bitten by the first uthread that returns to
a frame which has been locked by a different interpreter.

It is perfectly fine to create uthreads or coroutines in
the context of __init__. Stackless of course allows
re-using frames that have been in any recursion. The
point is: after a recursive interpreter is gone, there
is no problem using its frames.
We just need to avoid making __init__ the workhorse,
which is bad style, anyway.

> > The one comparable situation
> > in normal Python is crossing threads in callbacks. With the
> > exception of a couple of complete madmen (doing COM
> > support), everyone else learns to avoid the situation.
> 
> But if you can't even *start* a thread using a callback,
> how do you do anything with threads at all?

You can *create* a thread using a callback. It will be started
after the callback is gone. That's sufficient in most cases.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From tim.one at home.com  Wed Mar 14 12:02:12 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 06:02:12 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131018.FAA30047@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEJIJFAA.tim.one@home.com>

[Guido]
> I'd like to argue about that.  I think the extent to which HWFP
> doesn't work for newbies is mostly related to the change we made in
> 2.0 where repr() (and hence the interactive prompt) show full
> precision, leading to annoyances like repr(1.1) == '1.1000000000000001'.
>
> I've noticed that the number of complaints I see about this went way
> up after 2.0 was released.

Indeed yes, but I think that's a *good* thing.  We can't stop people from
complaining, but we can influence *what* they complain about, and it's
essential for newbies to learn ASAP that they have no idea how binary fp
arithmetic works.  Note that I spend a lot more of my life replying to these
complaints than you <wink>, and I can cut virtually all of them off early now
by pointing to the RepresentationError wiki page.  Before, it was an endless
sequence of "unique" complaints about assorted things that "didn't work
right", and that was much more time-consuming for me.  Of course, it's not a
positive help to the newbies so much as that scaring them early saves them
greater troubles later <no wink>.

Regular c.l.py posters can (& do!) handle this now too, thanks to hearing the
*same* complaint repeatedly now.  For example, over the past two days there
have been more than 40 messages on c.l.py about this, none of them stemming
from the conference or Moshe's PEP, and none of them written by me.  It's a
pattern:

+ A newcomer to Python complains about the interactive-prompt fp display.

+ People quickly uncover that's the least of their problems (that, e.g., they
truly *believe* Python should get dollars and cents exactly right all by
itself, and are programming as if that were true).

+ The fp display is the easiest of all fp surprises to explain fully and
truthfully (although the wiki page should make painfully clear that "easiest"
!= "easy" by a long shot), so is the quickest route toward disabusing them of
their illusions.

+ A few people suggest they use my FixedPoint.py instead; a few more that
they compute using cents instead (using ints or longs); and there's always
some joker who flames that if they're writing code for clients and have such
a poor grasp of fp reality, they should be sued for "technical incompetence".

Except for the flames, this is good in my eyes.

> I expect that most newbies don't use floating point in a fancy way,
> and would never notice it if it was slightly off as long as the output
> was rounded like it was before 2.0.

I couldn't disagree more that ignorance is to be encouraged, either in
newbies or in experts.  Computational numerics is a difficult field with
major consequences in real life, and if the language can't actively *help*
people with that, it should at least avoid encouraging a fool's confidence in
their folly.  If that isn't virulent enough for you <wink>, read Kahan's
recent "Marketing versus Mathematics" rant, here:

    http://www.cs.berkeley.edu/~wkahan/MktgMath.pdf

A point he makes over and over, illustrated with examples, is this:

    Decimal displays of Binary nonintegers cannot always be WYSIWYG.

    Trying to pretend otherwise afflicts both customers and
    implementors with bugs that go mostly misdiagnosed, so 'fixing'
    one bug merely spawns others. 


In a specific example of a nasty real-life bug beginning on page 13, he calls
the conceit (& source of the bug) of rounding fp displays to 15 digits
instead of 17 "a pious fraud".  And he's right.  It spares the implementer
some shallow complaints at the cost of leading naive users down a garden
path, where they end up deeper and deeper in weeds over their heads.
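(Kahan's 15-versus-17-digit point is easy to see firsthand; a sketch in present-day Python using printf-style formatting — 17 significant digits are exactly enough to round-trip an IEEE-754 double, while 15 hide the error.)

```python
x = 1.1

# 15 significant digits give the "pious" display...
print("%.15g" % x)        # 1.1
# ...while 17 digits show the double nearest to 1.1:
print("%.17g" % x)        # 1.1000000000000001

# The stored value really isn't 1.1, which surfaces in arithmetic:
print(x * x - 1.21)       # a tiny nonzero number, not 0.0

# 17 digits always round-trip a double exactly:
assert float("%.17g" % x) == x
```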

Of course he acknowledges that 17-digit display "[annoys] users who expected
roundoff to degrade only the last displayed digit of simple expressions, and
[confuses] users who did not expect roundoff at all" -- but seeking to fuzz
those truths has worse consequences.

In the end, he smacks up against the same need to favor one group at the
expense of the other:

   Binary floating-point is best for mathematicians, engineers and most
   scientists, and for integers that never get rounded off.  For everyone
   else Decimal floating-point is best because it is the only way What
   You See can be What You Get, which is a big step towards reducing
   programming languages' capture cross-section for programming errors.

He's wrong via omission about the latter, though:  rationals are also a way
to achieve that (so long as you stick to + - * /; decimal fp is still
arguably better once a sqrt or transcendental gets into the pot).

>> Presumably ABC used rationals because usability studies showed
>> they worked best (or didn't they test this?).

> No, I think at best the usability studies showed that floating point
> had problems that the ABC authors weren't able to clearly explain to
> newbies.  There was never an experiment comparing FP to rationals.

>> Presumably the TeachScheme! dialect of Scheme uses rationals for
>> the same reason.

> Probably for the same reasons.

Well, you cannot explain binary fp *clearly* to newbies in reasonable time,
so I can't fault any teacher or newbie-friendly language for running away
from it.  Heck, most college-age newbies are still partly naive about fp
numerics after a good one-semester numerical analysis course (voice of
experience, there).

>> 1/10 and 0.1 are indeed very different beasts to me).

> Another hard question: does that mean that 1 and 1.0 are also very
> different beasts to you?  They weren't to the Alice users who started
> this by expecting 1/4 to represent a quarter turn.

1/4 *is* a quarter turn, and exactly a quarter turn, under every alternative
being discussed (binary fp, decimal fp, rationals).  The only time it isn't
is under Python's current rules.  So the Alice users will (presumably) be
happy with any change whatsoever from the status quo.

They may not be so happy if they do ten 1/10 turns and don't get back to
where they started (which would happen under binary fp, but not decimal fp or
rationals).

Some may even be so unreasonable <wink> as to be unhappy if six 1/6 turns
wasn't a wash (which leaves only rationals as surprise-free).

Paul Dubois wants a way to tag fp literals (see his proposal).  That's
reasonable for his field.  DrScheme's Student levels have a way to tag
literals as inexact too, which allows students to get their toes wet with
binary fp while keeping their gonads on dry land.  Most people can't ride
rationals forever, but they're great training wheels; thoroughly adequate for
dollars-and-cents computations (the denominators don't grow when they're all
the same, so $1.23 computations don't "blow up" in time or space); and a
darned useful tool for dead-serious numeric grownups in sticky numerical
situations (rationals are immune to all of overflow, underflow, roundoff
error, and catastrophic cancellation, when sticking to + - * /).
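(The turn examples check out exactly under rationals; a sketch using the fractions module, which postdates this discussion but implements exactly the arithmetic described.)

```python
from fractions import Fraction

# Ten 1/10 turns under binary fp drift off a full revolution...
assert sum([0.1] * 10) != 1.0

# ...but rationals are exact under + - * /:
assert sum([Fraction(1, 10)] * 10) == 1
assert sum([Fraction(1, 6)] * 6) == 1

# Dollars-and-cents denominators don't grow when they're all the same,
# so $1.23-style computations don't blow up in time or space:
total = Fraction(123, 100) + Fraction(456, 100) + Fraction(7, 100)
print(total)   # 293/50, i.e. $5.86
```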

Given that Python can't be maximally friendly to everyone here, and has a
large base of binary fp users I don't hate at all <wink>, the best I can
dream up is:

    1.3    binary fp, just like now

    1.3_r  exact rational (a tagged fp literal)

    1/3    exact rational

    1./3   binary fp

So, yes, 1.0 and 1 are different beasts to me:  the "." alone and without an
"_r" tag says "I'm an approximation, and approximations are contagious:
inexact in, inexact out".

Note that the only case where this changes the meaning of existing code is

    1/3

But that has to change anyway lest the Alice users stay stuck at 0 forever.

> You know where I'm leaning...  I don't know that newbies are genuinely
> hurt by FP.

They certainly are burned by binary FP if they go on to do any numeric
programming.  The junior high school textbook formula for solving a quadratic
equation is numerically unstable.  Ditto the high school textbook formula for
computing variance.  Etc.  They're *surrounded* by deep pits; but they don't
need to be, except for the lack of *some* way to spell a newbie-friendly
arithmetic type.
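(The textbook instability is easy to trigger; a sketch with made-up coefficients chosen so the two roots differ hugely in magnitude.)

```python
import math

# x^2 - 1e9*x + 1 = 0 has roots near 1e9 and 1e-9.
a, b, c = 1.0, -1e9, 1.0

# Junior-high formula: the small root is the difference of two nearly
# equal numbers, and catastrophic cancellation wipes it out entirely.
naive_small = (-b - math.sqrt(b*b - 4*a*c)) / (2*a)
print(naive_small)            # 0.0 -- catastrophically wrong

# Stable version: compute the large-magnitude root first, then recover
# the small one from x1 * x2 == c/a, avoiding the cancellation.
q = -0.5 * (b - math.sqrt(b*b - 4*a*c))   # b < 0, so magnitudes add here
large = q / a
small = c / q
print(small)                  # 1e-09 -- correct
```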

> If we do it right, the naive ones will try 11.0/10.0, see
> that it prints 1.1, and be happy;

Cool.  I make a point of never looking at my chest x-rays either <0.9 wink>.

> the persistent ones will try 1.1**2-1.21, ask for an explanation, and
> get an introduction to floating point.  This *doesn't* have to explain all
> the details, just the two facts that you can lose precision and that 1.1
> isn't representable exactly in binary.

Which leaves them where?  Uncertain & confused (as you say, they *don't* know
all the details, or indeed really any of them -- they just know "things go
wrong", without any handle on predicting the extent of the problems, let
alone any way of controlling them), and without an alternative they *can*
feel confident about (short of sticking to integers, which may well be the
most frequent advice they get on c.l.py).  What kind of way is that to treat
a poor newbie?

I'll close w/ Kahan again:

    Q. Besides its massive size, what distinguishes today's market for
       floating-point arithmetic from yesteryears'?

    A. Innocence
       (if not inexperience, naïveté, ignorance, misconception,
        superstition, ...)

non-extended-binary-fp-is-an-expert's-tool-ly y'rs  - tim




From bckfnn at worldonline.dk  Wed Mar 14 12:48:51 2001
From: bckfnn at worldonline.dk (Finn Bock)
Date: Wed, 14 Mar 2001 11:48:51 GMT
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <15022.34452.183052.362184@anthem.wooz.org>
References: <20010312220425.T404@xs4all.nl> <200103122332.SAA22948@cj20424-a.reston1.va.home.com> <15021.24645.357064.856281@anthem.wooz.org> <3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org>
Message-ID: <3aaf5a78.8312542@smtp.worldonline.dk>

>>>>>> "FB" == Finn Bock <bckfnn at worldonline.dk> writes:
>
>    | - and as keyword argument names in arglist
>
>I think this last one doesn't work:

[Barry]

>-------------------- snip snip --------------------
>Jython 2.0 on java1.3.0 (JIT: jitc)
>Type "copyright", "credits" or "license" for more information.
>>>> def foo(class=None): pass
>Traceback (innermost last):
>  (no code object) at line 0
>  File "<console>", line 1
>	def foo(class=None): pass
>	        ^
>SyntaxError: invalid syntax
>>>> def foo(print=None): pass
>Traceback (innermost last):
>  (no code object) at line 0
>  File "<console>", line 1
>	def foo(print=None): pass
>	        ^
>SyntaxError: invalid syntax
>-------------------- snip snip --------------------

You are trying to use it in the grammar production "varargslist". It
doesn't work there. It only works in the grammar production "arglist".

The distinction is a good example of how Jython tries to make it
possible to use reserved words defined in external code, but does not
try to allow the use of reserved words everywhere.

regards,
finn



From bckfnn at worldonline.dk  Wed Mar 14 12:49:54 2001
From: bckfnn at worldonline.dk (Finn Bock)
Date: Wed, 14 Mar 2001 11:49:54 GMT
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <000b01c0ac1d$ad79bec0$e46940d5@hagrid>
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org> <000b01c0ac1d$ad79bec0$e46940d5@hagrid>
Message-ID: <3aaf5aa5.8357597@smtp.worldonline.dk>

>barry wrote:
>>
>>    | - and as keyword argument names in arglist
>>
>> I think this last one doesn't work:
>> 
>> -------------------- snip snip --------------------
>> Jython 2.0 on java1.3.0 (JIT: jitc)
>> Type "copyright", "credits" or "license" for more information.
>> >>> def foo(class=None): pass
>> Traceback (innermost last):
>>   (no code object) at line 0
>>   File "<console>", line 1
>> def foo(class=None): pass
>>         ^
>> SyntaxError: invalid syntax
>> >>> def foo(print=None): pass
>> Traceback (innermost last):
>>   (no code object) at line 0
>>   File "<console>", line 1
>> def foo(print=None): pass
>>         ^
>> SyntaxError: invalid syntax
>> -------------------- snip snip --------------------

[/F]

>>>> def spam(**kw):
>...     print kw
>...
>>>> spam(class=1)
>{'class': 1}
>>>> spam(print=1)
>{'print': 1}

Exactly.

This feature is mainly used by constructors for Java objects, where
keywords become bean property assignments.

  b = JButton(text="Press Me", enabled=1, size=(30, 40))

is a shorthand for

  b = JButton()
  b.setText("Press Me")
  b.setEnabled(1)
  b.setSize(30, 40)

Since the bean property names are outside Jython's control, we allow
AnyName in that position.
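(For CPython users, the usual workaround is to spell such a call with an unpacked dict — a sketch with a stand-in function, since JButton is Jython/Swing-specific:)

```python
def spam(**kw):
    return kw

# CPython refuses the literal spelling at compile time...
try:
    compile("spam(class=1)", "<example>", "eval")
except SyntaxError:
    print("SyntaxError, as expected")

# ...but the same call can be made via an unpacked dict:
print(spam(**{"class": 1, "print": 2}))
# {'class': 1, 'print': 2}
```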

regards,
finn



From fredrik at pythonware.com  Wed Mar 14 14:09:51 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 14:09:51 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org> <000b01c0ac1d$ad79bec0$e46940d5@hagrid> <3aaf5aa5.8357597@smtp.worldonline.dk>
Message-ID: <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>

finn wrote:

> >>>> spam(class=1)
> >{'class': 1}
> >>>> spam(print=1)
> >{'print': 1}
> 
> Exactly.

how hard would it be to fix this in CPython?  can it be
done in time for 2.1?  (Thomas?)

Cheers /F




From thomas at xs4all.net  Wed Mar 14 14:58:50 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 14 Mar 2001 14:58:50 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>; from fredrik@pythonware.com on Wed, Mar 14, 2001 at 02:09:51PM +0100
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org> <000b01c0ac1d$ad79bec0$e46940d5@hagrid> <3aaf5aa5.8357597@smtp.worldonline.dk> <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>
Message-ID: <20010314145850.D404@xs4all.nl>

On Wed, Mar 14, 2001 at 02:09:51PM +0100, Fredrik Lundh wrote:
> finn wrote:

> > >>>> spam(class=1)
> > >{'class': 1}
> > >>>> spam(print=1)
> > >{'print': 1}
> > 
> > Exactly.

> how hard would it be to fix this in CPython?  can it be
> done in time for 2.1?  (Thomas?)

Well, Monday night my jetlag hit very badly (I flew back on the night from
Saturday to Sunday) and caused me to skip an entire night of sleep. I spent
part of that breaking my brain over the parser :) I have no experience with
parsers or parser-writing, by the way, so this comes hard to me, and I have
no clue how this is solved in other parsers.

I seriously doubt it can be done for 2.1, unless someone knows parsers well
and can deliver an extended version of the current parser well before the
next beta. Changing the parser to something not so limited as our current
parser would be too big a change to slip in right before 2.1. 

Fixing the current parser is possible, but not straightforward. As far as I
can figure out, the parser first breaks up the file into elements and then
classifies the elements, and if an element cannot be classified, it is left
as a bareword for the subsequent passes to catch it as either a valid
identifier in a valid context, or a syntax error.

I guess it should be possible to hack the parser so it accepts other
statements where it expects an identifier, and then treats those statements
as strings, but you can't just accept all statements -- some will be needed
to bracket the identifier, or you get weird behaviour when you say 'def ()'.
So you need to maintain a list of acceptable statements and try each of
those... My guess is that it's possible, I just haven't figured out how to
do it yet. Can we force a certain 'ordering' in the keywords (their symbolic
number as #defined in graminit.h) some way ?

Another solution would be to do it explicitly in Grammar. I posted an
attempt at that before, but it hurts. It can be done in two ways, both of
which hurt for different reasons :) For example,

funcdef: 'def' NAME parameters ':' suite

can be changed in

funcdef: 'def' nameorkw parameters ':' suite
nameorkw: NAME | 'def' | 'and' | 'pass' | 'print' | 'return' | ...

or in

funcdef: 'def' (NAME | 'def' | 'and' | 'pass' | 'print' | ...) parameters ':' suite

The first means changing the places that currently accept a NAME, and that
means that all places where the compiler does STR(node) have to be checked.
There are a *lot* of those, and it isn't directly obvious whether they expect
node to be a NAME, or really know that, or think they know that. STR() could
be made to detect 'nameorkw' nodetypes and get the STR() of its first child
if so, but that's really an ugly hack.

The second way is even more of an ugly hack, but it doesn't require any
changes in the parser. It just requires making the Grammar look like random
garbage :) Of course, we could keep the grammar the way it is, and
preprocess it before feeding it to the parser, extracting all keywords
dynamically and sneakily replacing NAME with (NAME | keywords )... hmm...
that might actually be workable. It would still be a hack, though.
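(That preprocessing hack can be sketched in a few lines — a hypothetical helper, not the real pgen toolchain: harvest every quoted keyword from the grammar text and substitute an alternation wherever NAME appears.)

```python
import re

def relax_keywords(grammar_text):
    # Collect every quoted keyword that appears in the grammar...
    keywords = sorted(set(re.findall(r"'([a-z]+)'", grammar_text)))
    alternation = "(NAME | %s)" % " | ".join("'%s'" % kw for kw in keywords)
    # ...and allow any of them wherever a NAME is accepted.
    return re.sub(r"\bNAME\b", alternation, grammar_text)

grammar = "funcdef: 'def' NAME parameters ':' suite"
print(relax_keywords(grammar))
# funcdef: 'def' (NAME | 'def') parameters ':' suite
```

A real version would have to feed the rewritten text through pgen and deal with the compiler-side STR(node) issue Thomas describes, so this only illustrates the Grammar-rewriting half of the idea.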

Now-for-something-easy--meetings!-ly y'rs ;)
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fredrik at pythonware.com  Wed Mar 14 15:03:21 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 15:03:21 +0100
Subject: [Python-Dev] OT: careful with that perl code
Message-ID: <011601c0ac8f$8cb66b80$0900a8c0@SPIFF>

http://slashdot.org/article.pl?sid=01/03/13/208259&mode=nocomment

    "because he wasn't familiar with the distinction between perl's
    scalar and list context, S. now has a police record"




From jeremy at alum.mit.edu  Wed Mar 14 15:25:49 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 14 Mar 2001 09:25:49 -0500 (EST)
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>
References: <20010312220425.T404@xs4all.nl>
	<200103122332.SAA22948@cj20424-a.reston1.va.home.com>
	<15021.24645.357064.856281@anthem.wooz.org>
	<3aae83f7.41314216@smtp.worldonline.dk>
	<15022.34452.183052.362184@anthem.wooz.org>
	<000b01c0ac1d$ad79bec0$e46940d5@hagrid>
	<3aaf5aa5.8357597@smtp.worldonline.dk>
	<00ca01c0ac88$0ec93150$0900a8c0@SPIFF>
Message-ID: <15023.32621.173685.834783@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "FL" == Fredrik Lundh <fredrik at pythonware.com> writes:

  FL> finn wrote:
  >> >>>> spam(class=1)
  >> >{'class': 1}
  >> >>>> spam(print=1)
  >> >{'print': 1}
  >>
  >> Exactly.

  FL> how hard would it be to fix this in CPython?  can it be done in
  FL> time for 2.1?  (Thomas?)

Only if he can use the time machine to slip it in before 2.1b1.

Jeremy



From gmcm at hypernet.com  Wed Mar 14 16:08:16 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Wed, 14 Mar 2001 10:08:16 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.50867.210827.597710@w221.z064000254.bwi-md.dsl.cnc.net>
References: <3AAE9E3F.9635.7CE46C9C@localhost>
Message-ID: <3AAF4310.26204.7F683B24@localhost>

[Jeremy]
> >>>>> "GMcM" == Gordon McMillan <gmcm at hypernet.com> writes:
> 
>   >> Is this the semantic difference between Stackless and
>   CPython >> that people are getting all in a lather about?
> 
>   GMcM> What semantic difference? You can't transfer control to a
>   GMcM> coroutine / urthread in a magic method in CPython, either
>   GMcM> <wink>.
> 
> If I have a library or class that uses threads under the covers,
> I can create the threads in whatever code block I want,
> regardless of what is on the call stack above the block.  The
> reason that coroutines / uthreads are different is that the
> semantics of control transfers are tied to what the call stack
> looks like a) when the thread is created and b) when a control
> transfer is attempted.

Just b) I think.
 
> This restriction seems quite at odds with modularity.  (Could I
> import a module that creates a thread within an __init__ method?)
>  The correctness of a library or class depends on the entire call
> chain involved in its use.

Coroutines are not threads, nor are uthreads. Threads are 
used for comparison purposes because for most people, they 
are the only model for transfers of control outside regular call / 
return. My first serious programming language was IBM 
assembler which, at the time, did not have call / return. That 
was one of about 5 common patterns used. So I don't suffer 
from the illusion that call / return is the only way to do things.

In some ways threads make a lousy model for what's going 
on. They are OS level things. If you were able, on your first 
introduction to threads, to immediately fit them into your 
concept of "modularity", then you are truly unique. They are 
antithetical to my notion of modularity.

If you have another model outside threads and call / return, 
trot it out. It's sure to be a fresher horse than this one.
 
> It's not at all modular, because a programmer could make a local
> decision about organizing a particular module and cause errors in
> a module they don't even use directly.  This would occur if
> module A uses uthreads, module B is a client of module A, and the
> user writes a program that uses module B.  He unsuspectingly adds
> a call to module A in an __init__ method and *boom*.

You will find this enormously more difficult to demonstrate 
than assert. Module A does something in the background. 
Therefore module B does something in the background. There 
is no technique for backgrounding processing which does not 
have some implications for the user of module B. If modules A 
and or B are poorly coded, it will have obvious implications for 
the user.

> "Python is a language in which the use of uthreads in a module
> you didn't know existed can render your own program unusable." 
> <wink>

Your arguments are all based on rather fantastical notions of 
evil module writers pulling dirty tricks on clueless innocent 
programmers. In fact, they're based on the idea that the 
programmer was successfully using module AA, then 
switched to using A (which must have been advertised as a 
drop in replacement) and then found that they went "boom" in 
an __init__ method that used to work. Python today has no 
shortage of ways in which evil module writers can cause 
misery for programmers. Stackless does not claim that 
module writers claiming full compatibility are telling the truth. If 
module A does not suit your needs, go back to module AA.

Obviously, those of us who like Stackless would be delighted 
to have all interpreter recursions removed. It's also obvious 
where your rhetorical argument is headed: Stackless is 
dangerous unless all interpreter recursions are eliminated; it's 
too much work to remove all interpreter recursions until Py4K; 
please reassign this PEP a nineteen digit number.

and-there-is-NO-truth-to-the-rumor-that-stackless-users
-eat-human-flesh-<munch, munch>-ly y'rs

- Gordon



From tismer at tismer.com  Wed Mar 14 16:23:38 2001
From: tismer at tismer.com (Christian Tismer)
Date: Wed, 14 Mar 2001 16:23:38 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <3AAE9E3F.9635.7CE46C9C@localhost> <3AAF4310.26204.7F683B24@localhost>
Message-ID: <3AAF8CFA.58A9A68B@tismer.com>


Gordon McMillan wrote:
> 
> [Jeremy]

<big snip/>

> Obviously, those of us who like Stackless would be delighted
> to have all interpreter recursions removed. It's also obvious
> where your rhetorical argument is headed: Stackless is
> dangerous unless all interpreter recursions are eliminated; it's
> too much work to remove all interpreter recursions until Py4K;
> please reassign this PEP a nineteen digit number.

Of course we would like to see all recursions vanish.
Unfortunately this would make Python's current codebase
vanish almost completely, too, which would be bad. :)

That's the reason to have Stack Lite.

The funny observation after following this thread:
It appears that Stack Lite is in fact best suited for
Microthreads, better than for coroutines.

Reason: Microthreads schedule automatically, when allowed.
In normal use, spawning a uthread from any extension gives
you no trouble, since the scheduling is done by the
interpreter in charge only when it is active, after all nested
calls have been done.

Hence, Stack Lite gives us *all* of uthreads, and almost all of
generators and coroutines, except for the mentioned cases.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From guido at digicool.com  Wed Mar 14 16:26:23 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 14 Mar 2001 10:26:23 -0500
Subject: [Python-Dev] Kinds
In-Reply-To: Your message of "Tue, 13 Mar 2001 08:38:35 PST."
             <ADEOIFHFONCLEEPKCACCIEKKCGAA.paul@pfdubois.com> 
References: <ADEOIFHFONCLEEPKCACCIEKKCGAA.paul@pfdubois.com> 
Message-ID: <200103141526.KAA04151@cj20424-a.reston1.va.home.com>

I liked Paul's brief explanation of Kinds.  Maybe we could make it so
that there's a special Kind representing bignums, and eventually that
could become the default (as part of the int unification).  Then
everybody can have it their way.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Wed Mar 14 16:33:50 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 14 Mar 2001 10:33:50 -0500
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
In-Reply-To: Your message of "Tue, 13 Mar 2001 19:08:05 +0100."
             <20010313190805.C404@xs4all.nl> 
References: <E14ciAp-0005dJ-00@darjeeling> <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>  
            <20010313190805.C404@xs4all.nl> 
Message-ID: <200103141533.KAA04216@cj20424-a.reston1.va.home.com>

> I think the main reason for
> separate lists is to allow non-python-dev-ers easy access to the lists. 

Yes, this is the main reason.

I like it, it keeps my inbox separated out.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From pedroni at inf.ethz.ch  Wed Mar 14 16:41:03 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Wed, 14 Mar 2001 16:41:03 +0100 (MET)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
Message-ID: <200103141541.QAA03543@core.inf.ethz.ch>

Hi.

First of all I should admit I don't know what was discussed
at IPC9 about Stackless Python.

My plain question (as a Jython developer): is there a real intention
to make Python stackless in the short term (2.2, 2.3, ...)?

AFAIK, for Jython there are then three options:
1 - Just don't care
2 - A major rewrite with performance issues (but AFAIK nobody has
  the resources for doing that)
3 - Try to implement some of the offered high-level features through threads
   (which could be pointless from a performance point of view:
     e.g. microthreads through threads, not that nice).

Option 3 exists just for the theoretical sake of compatibility
(I don't see the point of porting stackless-based Python code to Jython),
which leaves option 1 plus some amount of frustration <wink>. Am I
missing something?

The problem will be more serious if the std lib begins to use
the stackless features heavily.


regards, Samuele Pedroni.




From barry at digicool.com  Wed Mar 14 17:06:57 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Wed, 14 Mar 2001 11:06:57 -0500
Subject: [Python-Dev] OT: careful with that perl code
References: <011601c0ac8f$8cb66b80$0900a8c0@SPIFF>
Message-ID: <15023.38689.298294.736516@anthem.wooz.org>

>>>>> "FL" == Fredrik Lundh <fredrik at pythonware.com> writes:

    FL> http://slashdot.org/article.pl?sid=01/03/13/208259&mode=nocomment

    FL>     "because he wasn't familiar with the distinction between
    FL> perl's scalar and list context, S. now has a police record"

If it's true, I don't know which part of that article scares or depresses me more.

born-in-the-usa-ly y'rs,
-Barry



From aycock at csc.UVic.CA  Wed Mar 14 19:02:43 2001
From: aycock at csc.UVic.CA (John Aycock)
Date: Wed, 14 Mar 2001 10:02:43 -0800
Subject: [Python-Dev] CML2 compiler speedup
Message-ID: <200103141802.KAA02907@valdes.csc.UVic.CA>

| talking about performance, has anyone played with using SRE's
| lastindex/lastgroup stuff with SPARK?

Not yet.  I will defer to Tim's informed opinion on this.

| (is there anything else I could do in SRE to make SPARK run faster?)

Well, if I'm wishing..  :-)

I would like all the parts of an alternation A|B|C to be searched for
at the same time (my assumption is that they aren't currently).  And
I'd also love a flag that would disable "first then longest" semantics
in favor of always taking the longest match.

John



From thomas at xs4all.net  Wed Mar 14 19:36:17 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 14 Mar 2001 19:36:17 +0100
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <200103141802.KAA02907@valdes.csc.UVic.CA>; from aycock@csc.UVic.CA on Wed, Mar 14, 2001 at 10:02:43AM -0800
References: <200103141802.KAA02907@valdes.csc.UVic.CA>
Message-ID: <20010314193617.F404@xs4all.nl>

On Wed, Mar 14, 2001 at 10:02:43AM -0800, John Aycock wrote:

> I would like all the parts of an alternation A|B|C to be searched for
> at the same time (my assumption is that they aren't currently).  And
> I'd also love a flag that would disable "first then longest" semantics
> in favor of always taking the longest match.

While on that subject.... Is there an easy way to get all the occurrences
of a repeating group?  I wanted to do something like 'foo(bar|baz)+' and be
able to retrieve all matches of the group.  I fixed it differently now, but I
kept wondering why that wasn't possible.
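[For the record: in the re/sre engine a repeated capturing group only keeps
its *last* repetition, which is why this wasn't possible directly.  A sketch
of the usual workaround, in modern re syntax:]

```python
import re

# A repeated group only retains its last repetition:
m = re.match(r'foo(bar|baz)+', 'foobarbazbar')
print(m.group(1))   # -> bar  (earlier repetitions are discarded)

# Common workaround: capture the whole repeated span with a
# non-capturing inner group, then scan that span again with
# findall on the alternation alone.
m2 = re.match(r'foo((?:bar|baz)+)', 'foobarbazbar')
parts = re.findall(r'bar|baz', m2.group(1))
print(parts)        # -> ['bar', 'baz', 'bar']
```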

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at golux.thyrsus.com  Tue Mar 13 23:17:42 2001
From: esr at golux.thyrsus.com (Eric)
Date: Tue, 13 Mar 2001 14:17:42 -0800
Subject: [Python-Dev] freeze is broken in 2.x
Message-ID: <E14cx6s-0002zN-00@golux.thyrsus.com>

It appears that the freeze tools are completely broken in 2.x.  This 
is rather unfortunate, as I was hoping to use them to end-run some
objections to CML2 and thereby get python into the Linux kernel tree.

I have fixed some obvious errors (use of the deprecated 'cmp' module;
use of regex) but I have encountered run-time errors that are beyond
my competence to fix.  From a cursory inspection of the code it looks
to me like the freeze tools need adaptation to the new
distutils-centric build process.

Do these tools have a maintainer?  They need some serious work.
--
							>>esr>>



From thomas.heller at ion-tof.com  Wed Mar 14 22:23:39 2001
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Wed, 14 Mar 2001 22:23:39 +0100
Subject: [Python-Dev] freeze is broken in 2.x
References: <E14cx6s-0002zN-00@golux.thyrsus.com>
Message-ID: <05fd01c0accd$0a1dc450$e000a8c0@thomasnotebook>

> It appears that the freeze tools are completely broken in 2.x.  This 
> is rather unfortunate, as I was hoping to use them to end-run some
> objections to CML2 and thereby get python into the Linux kernel tree.
> 
> I have fixed some obvious errors (use of the deprecated 'cmp' module;
> use of regex) but I have encountered run-time errors that are beyond
> my competence to fix.  From a cursory inspection of the code it looks
> to me like the freeze tools need adaptation to the new
> distutils-centric build process.

I have some ideas about merging freeze into distutils, but this is
nothing which could be implemented for 2.1.

> 
> Do these tools have a maintainer?  They need some serious work.

At least they seem to have users.

Thomas




From esr at golux.thyrsus.com  Wed Mar 14 22:37:10 2001
From: esr at golux.thyrsus.com (Eric)
Date: Wed, 14 Mar 2001 13:37:10 -0800
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>; from tim.one@home.com on Wed, Mar 14, 2001 at 02:27:21AM -0500
References: <200103131532.f2DFWpw04691@snark.thyrsus.com> <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>
Message-ID: <20010314133710.J2046@thyrsus.com>

Tim Peters <tim.one at home.com>:
> If all you got out of crafting a one-grammar parser by hand is a measly
> factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
> parser generators for restricted grammars, in C).  For the all-purpose Earley
> parser to get that close is really quite an accomplishment!  SPARK was
> written primarily for rapid prototyping, at which it excels (how many times
> did you change your grammar during development?  how much longer would it
> have taken you to adjust had you needed to rework your RD parser each time?).

SPARK is indeed a wonderful prototyping tool, and I admire John Aycock for
producing it (though he really needs to do better on the documentation).

Unfortunately, Michael Elizabeth Chastain pointed out that it imposes a
bad startup delay in some important cases of CML2 usage.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Americans have the will to resist because you have weapons. 
If you don't have a gun, freedom of speech has no power.
         -- Yoshimi Ishikawa, Japanese author, in the LA Times 15 Oct 1992



From esr at golux.thyrsus.com  Wed Mar 14 22:38:14 2001
From: esr at golux.thyrsus.com (Eric)
Date: Wed, 14 Mar 2001 13:38:14 -0800
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <014401c0ac60$4f0b1c60$e46940d5@hagrid>; from fredrik@pythonware.com on Wed, Mar 14, 2001 at 09:25:19AM +0100
References: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com> <014401c0ac60$4f0b1c60$e46940d5@hagrid>
Message-ID: <20010314133814.K2046@thyrsus.com>

Fredrik Lundh <fredrik at pythonware.com>:
> tim wrote:
> > If all you got out of crafting a one-grammar parser by hand is a measly
> > factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
> > parser generators for restricted grammars, in C).
> 
> talking about performance, has anyone played with using SRE's
> lastindex/lastgroup stuff with SPARK?
> 
> (is there anything else I could do in SRE to make SPARK run faster?)

Wouldn't help me, I wasn't using the SPARK scanner.  The overhead really
was in the parsing.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Gun Control: The theory that a woman found dead in an alley, raped and
strangled with her panty hose, is somehow morally superior to a
woman explaining to police how her attacker got that fatal bullet wound.
	-- L. Neil Smith



From guido at digicool.com  Thu Mar 15 00:05:50 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 14 Mar 2001 18:05:50 -0500
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: Your message of "Tue, 13 Mar 2001 14:17:42 PST."
             <E14cx6s-0002zN-00@golux.thyrsus.com> 
References: <E14cx6s-0002zN-00@golux.thyrsus.com> 
Message-ID: <200103142305.SAA05872@cj20424-a.reston1.va.home.com>

> It appears that the freeze tools are completely broken in 2.x.  This 
> is rather unfortunate, as I was hoping to use them to end-run some
> objections to CML2 and thereby get python into the Linux kernel tree.
> 
> I have fixed some obvious errors (use of the deprecated 'cmp' module;
> use of regex) but I have encountered run-time errors that are beyond
> my competence to fix.  From a cursory inspection of the code it looks
> to me like the freeze tools need adaptation to the new
> distutils-centric build process.
> 
> Do these tools have a maintainer?  They need some serious work.

The last maintainers were me and Mark Hammond, but neither of us has
time to look into this right now.  (At least I know I don't.)

What kind of errors do you encounter?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Thu Mar 15 01:28:15 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 19:28:15 -0500
Subject: [Python-Dev] 2.1b2 next Friday?
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMGJFAA.tim.one@home.com>

We need another beta release (according to me).  Anyone disagree?

If not, let's pump it out next Friday, 23-Mar-2001.  That leaves 3 weeks for
intense final testing before 2.1 final (which PEP 226 has scheduled for
13-Apr-2001).




From greg at cosc.canterbury.ac.nz  Thu Mar 15 01:31:00 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 15 Mar 2001 13:31:00 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAF3C45.1972981F@tismer.com>
Message-ID: <200103150031.NAA05310@s454.cosc.canterbury.ac.nz>

Christian Tismer <tismer at tismer.com>:

> You can *create* a thread using a callback.

Okay, that's not so bad. (An earlier message seemed to
be saying that you couldn't even do that.)

But what about GUIs such as Tkinter which have a
main loop in C that keeps control for the life of
the program? You'll never get back to the base-level
interpreter, not even between callbacks, so how do 
the uthreads get scheduled?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Mar 15 01:47:12 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 15 Mar 2001 13:47:12 +1300 (NZDT)
Subject: WYSIWYG decimal fractions (RE: [Python-Dev] Minutes from the Numeric Coercion dev-day session)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEJIJFAA.tim.one@home.com>
Message-ID: <200103150047.NAA05314@s454.cosc.canterbury.ac.nz>

Maybe Python should use decimal FP as the *default* representation
for fractional numbers, with binary FP available as an option for
those who really want it.

Unadorned FP literals would give you decimal FP, as would float().
There would be another syntax for binary FP literals (e.g. a 'b'
suffix) and a bfloat() function.

My first thought was that binary FP literals should have to be
written in hex or octal. ("You want binary FP? Then you can jolly
well learn to THINK in it!") But that might be a little extreme.

By the way, what if CPU designers started providing decimal FP 
in hardware? Could scientists and ordinary mortals then share the
same FP system and be happy? The only disadvantage I can think of 
for the scientists is that a bit more memory would be required, but
memory is cheap nowadays. Are there any other drawbacks that
I haven't thought of?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim.one at home.com  Thu Mar 15 03:01:50 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 21:01:50 -0500
Subject: WYSIWYG decimal fractions (RE: [Python-Dev] Minutes from the Numeric Coercion dev-day session)
In-Reply-To: <200103150047.NAA05314@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMIJFAA.tim.one@home.com>

[Greg Ewing]
> Maybe Python should use decimal FP as the *default* representation
> for fractional numbers, with binary FP available as an option for
> those who really want it.

NumPy users would scream bloody murder.

> Unadorned FP literals would give you decimal FP, as would float().
> There would be another syntax for binary FP literals (e.g. a 'b'
> suffix) and a bfloat() function.

Ditto.

> My first thought was that binary FP literals should have to be
> written in hex or octal. ("You want binary FP? Then you can jolly
> well learn to THINK in it!") But that might be a little extreme.

"A little"?  Yes <wink>.  Note that C99 introduces hex fp notation, though,
as it's the only way to be sure you're getting the bits you need (when it
really matters, as it can, e.g., in accurate implementations of math
libraries).
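[Hex fp notation eventually reached Python as well (float.hex() and
float.fromhex(), added in 2.6), which makes the point concrete — hex
literals spell out the exact bits, with no decimal rounding in between:]

```python
# A C99-style hex float: significand 1.8 (hex) times 2**1, i.e. 1.5 * 2.
print(float.fromhex('0x1.8p1'))   # -> 3.0

# The exact binary value Python actually stores for the literal 0.1:
print((0.1).hex())                # -> 0x1.999999999999ap-4
```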

> By the way, what if CPU designers started providing decimal FP
> in hardware? Could scientists and ordinary mortals then share the
> same FP system and be happy?

Sure!  Countless happy users of scientific calculators are evidence of
that -- virtually all calculators use decimal fp, for the obvious human
factors reasons ("obvious", I guess, to everyone except most post-1960's
language designers <wink>).

> The only disadvantage I can think of for the scientists is that a
> bit more memory would be required, but memory is cheap nowadays. Are
> there any other drawbacks that I haven't thought of?

See the Kahan paper I referenced yesterday (also the FAQ mentioned below).
He discusses it briefly.  Base 10 HW fp has small additional speed costs, and
makes error analysis a bit harder (at the boundaries where an exponent goes
up, the gaps between representable fp numbers are larger the larger the
base -- in a sense, e.g., whenever a decimal fp number ends with 5, it's
"wasting" a couple bits of potential precision; in that sense, binary fp is
provably optimal).


Mike Cowlishaw (REXX's father) is currently working hard in this area:

    http://www2.hursley.ibm.com/decimal/

That's an excellent resource for people curious about decimal fp.

REXX has many users in financial and commerical fields, where binary fp is a
nightmare to live with (BTW, REXX does use decimal fp).  An IBM study
referenced in the FAQ found that less than 2% of the numeric fields in
commercial databases contained data of a binary float type; more than half
used the database's form of decimal fp; the rest were of integer types.  It's
reasonable to speculate that much of the binary fp data was being used simply
because it was outside the dynamic range of the database's decimal fp type --
in which case even the tiny "< 2%" is an overstatement.

Maybe 5 years ago I asked Cowlishaw whether Python could "borrow" REXX's
software decimal fp routines.  He said sure.  Ironically, I had more time to
pursue it then than I have now ...

less-than-zero-in-an-unsigned-type-ly y'rs  - tim




From greg at cosc.canterbury.ac.nz  Thu Mar 15 05:02:24 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 15 Mar 2001 17:02:24 +1300 (NZDT)
Subject: WYSIWYG decimal fractions (RE: [Python-Dev] Minutes from the Numeric Coercion dev-day session)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEMIJFAA.tim.one@home.com>
Message-ID: <200103150402.RAA05333@s454.cosc.canterbury.ac.nz>

Tim Peters <tim.one at home.com>:

> NumPy users would scream bloody murder.

It would probably be okay for NumPy to use binary FP by default.
If you're using NumPy, you're probably a scientist or mathematician
already and are aware of the issues.

The same goes for any other extension module designed for
specialist uses, e.g. 3D graphics.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+




From aahz at panix.com  Thu Mar 15 07:14:54 2001
From: aahz at panix.com (aahz at panix.com)
Date: Thu, 15 Mar 2001 01:14:54 -0500 (EST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
Message-ID: <200103150614.BAA04221@panix6.panix.com>

[posted to c.l.py.announce and c.l.py; followups to c.l.py; cc'd to
python-dev]

Okay, folks, here it is, the first draft of the spec for creating Python
maintenance releases.  Note that I'm not on python-dev, so it's probably
better to have the discussion on c.l.py if possible.

            PEP: 6
          Title: Patch and Bug Fix Releases
        Version: $Revision: 1.1 $
         Author: aahz at pobox.com (Aahz)
         Status: Draft
           Type: Informational
        Created: 15-Mar-2001
   Post-History:
     _________________________________________________________________
   
  Abstract
  
    Python has historically had only a single fork of development,
    with releases having the combined purpose of adding new features
    and delivering bug fixes (these kinds of releases will be referred
    to as "feature releases").  This PEP describes how to fork off
    patch releases of old versions for the primary purpose of fixing
    bugs.

    This PEP is not, repeat NOT, a guarantee of the existence of patch
    releases; it only specifies a procedure to be followed if patch
    releases are desired by enough of the Python community willing to
    do the work.


  Motivation
  
    With the move to SourceForge, Python development has accelerated.
    There is a sentiment among part of the community that there was
    too much acceleration, and many people are uncomfortable with
    upgrading to new versions to get bug fixes when so many features
    have been added, sometimes late in the development cycle.

    One solution for this issue is to maintain old feature releases,
    providing bug fixes and (minimal!) feature additions.  This will
    make Python more attractive for enterprise development, where
    Python may need to be installed on hundreds or thousands of
    machines.

    At the same time, many of the core Python developers are
    understandably reluctant to devote a significant fraction of their
    time and energy to what they perceive as grunt work.  On the
    gripping hand, people are likely to feel discomfort around
    installing releases that are not certified by PythonLabs.


  Prohibitions
  
    Patch releases are required to adhere to the following
    restrictions:

    1. There must be zero syntax changes.  All .pyc and .pyo files
       must work (no regeneration needed) with all patch releases
       forked off from a feature release.

    2. There must be no incompatible C API changes.  All extensions
       must continue to work without recompiling in all patch releases
       in the same fork as a feature release.


  Bug Fix Releases
  
    Bug fix releases are a subset of all patch releases; it is
    prohibited to add any features to the core in a bug fix release.
    A patch release that is not a bug fix release may contain minor
    feature enhancements, subject to the Prohibitions section.

    The standard for patches to extensions and modules is a bit more
    lenient, to account for the possible desirability of including a
    module from a future version that contains mostly bug fixes but
    may also have some small feature changes.  (E.g. Fredrik Lundh
    making available the 2.1 sre module for 2.0 and 1.5.2.)


  Version Numbers
  
    Starting with Python 2.0, all feature releases are required to
    have the form X.Y; patch releases will always be of the form
    X.Y.Z.  To clarify the distinction between a bug fix release and a
    patch release, all non-bug fix patch releases will have the suffix
    "p" added.  For example, "2.1" is a feature release; "2.1.1" is a
    bug fix release; and "2.1.2p" is a patch release that contains
    minor feature enhancements.
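    [A hypothetical helper, not part of the PEP, showing how the scheme
    distinguishes the three kinds of release by version string alone:]

```python
# Illustrative only: classify a version string under the PEP 6 scheme.
def classify(version):
    if version.endswith('p'):
        return 'patch release'      # e.g. '2.1.2p': minor enhancements
    parts = version.split('.')
    if len(parts) == 2:
        return 'feature release'    # e.g. '2.1'
    return 'bug fix release'        # e.g. '2.1.1': fixes only

print(classify('2.1'))     # -> feature release
print(classify('2.1.1'))   # -> bug fix release
print(classify('2.1.2p'))  # -> patch release
```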


  Procedure
  
    XXX This section is still a little light (and probably
    controversial!)

    The Patch Czar is the counterpart to the BDFL for patch releases.
    However, the BDFL and designated appointees retain veto power over
    individual patches and the decision of whether to label a patch
    release as a bug fix release.

    As individual patches get contributed to the feature release fork,
    each patch contributor is requested to consider whether the patch
    is a bug fix suitable for inclusion in a patch release.  If the
    patch is considered suitable, the patch contributor will mail the
    SourceForge patch (bug fix?) number to the maintainers' mailing
    list.

    In addition, anyone from the Python community is free to suggest
    patches for inclusion.  Patches may be submitted specifically for
    patch releases; they should follow the guidelines in PEP 3[1].

    The Patch Czar decides when there are a sufficient number of
    patches to warrant a release.  The release gets packaged up,
    including a Windows installer, and made public as a beta release.
    If any new bugs are found, they must be fixed and a new beta
    release publicized.  Once a beta cycle completes with no new bugs
    found, the package is sent to PythonLabs for certification and
    publication on python.org.

    Each beta cycle must last a minimum of one month.


  Issues To Be Resolved
  
    Should the first patch release following any feature release be
    required to be a bug fix release?  (Aahz proposes "yes".)

    Is it allowed to do multiple forks (e.g. is it permitted to have
    both 2.0.2 and 2.0.2p)?  (Aahz proposes "no".)

    Does it make sense for a bug fix release to follow a patch
    release?  (E.g., 2.0.1, 2.0.2p, 2.0.3.)

    Exactly how does a candidate patch release get submitted to
    PythonLabs for certification?  And what does "certification" mean,
    anyway?  ;-)

    Who is the Patch Czar?  Is the Patch Czar a single person?  (Aahz
    says "not me alone".  Aahz is willing to do a lot of the
    non-technical work, but Aahz is not a C programmer.)

    What is the equivalent of python-dev for people who are
    responsible for maintaining Python?  (Aahz proposes either
    python-patch or python-maint, hosted at either python.org or
    xs4all.net.)

    Does SourceForge make it possible to maintain both separate and
    combined bug lists for multiple forks?  If not, how do we mark
    bugs fixed in different forks?  (Simplest is to simply generate a
    new bug for each fork that it gets fixed in, referring back to the
    main bug number for details.)


  References
  
    [1] PEP 3, Hylton, http://python.sourceforge.net/peps/pep-0003.html


  Copyright
  
    This document has been placed in the public domain.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"The overexamined life sure is boring."  --Loyal Mini Onion



From tismer at tismer.com  Thu Mar 15 12:30:09 2001
From: tismer at tismer.com (Christian Tismer)
Date: Thu, 15 Mar 2001 12:30:09 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103141541.QAA03543@core.inf.ethz.ch>
Message-ID: <3AB0A7C1.B86E63F2@tismer.com>


Samuele Pedroni wrote:
> 
> Hi.
> 
> First of all, I should admit I don't know what was discussed
> at IPC9 about Stackless Python.

This would have answered your question.

> My plain question (as a Jython developer): is there a real intention
> to make Python stackless in the short term (2.2, 2.3, ...)?

Yes.

> AFAIK, Jython then has three options:
> 1 - Just don't care
> 2 - A major rewrite with performance issues (but AFAIK nobody has
>   the resources for doing that)
> 3 - try to implement some of the offered high-level features through
>    threads (which could be pointless from a performance point of view:
>      e.g. microthreads through threads, not that nice).
>
> The realistic options are 3, just for the theoretical sake of
> compatibility (I don't see the point of porting Stackless-based Python
> code to Jython), or 1 plus some amount of frustration <wink>.  Am I
> missing something?
> 
> The problem will be more serious if the std lib begins to make
> heavy use of the Stackless features.

Even option 1 would be fine with me.  I would make all
Stackless features optional, not enforcing them for the
language.

Option 2 doesn't look reasonable. We cannot switch
microthreads without changing the VM. In CPython,
the VM is available, in Jython it is immutable.
The only way I would see is to turn Jython into
an interpreter instead of producing VM code. That
would do, but at an immense performance cost.

Option 3 is Guido's view of a compatibility layer.
Microthreads can be simulated by threads in fact.
This is slow, but compatible, making stuff just work.
Most probably this version is performing better than
option 2.

I don't believe that the library will become a problem,
if modifications are made with Jython in mind.

Personally, I'm not convinced that any of these will make
Jython users happy. The concurrency domain will in
fact be dominated by CPython, since one of the best
features of Uthreads is incredible speed and small size.
But this is similar to a couple of extensions for CPython
which are just not available for Jython.

I tried hard to find a way to make Jython stackless;
there was no way yet, and I'm very, very sorry!
On the other hand, I don't think
Jython should play showstopper for a technology
that people really want.
into Python without enforcing it would be my way.
Parallel stuff can sit in an extension module.
Of course there will be a split of modules which don't
work in Jython, or which are less efficient in Jython.
But if efficiency is the demand, Jython wouldn't be
the right choice, anyway.

regards - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From guido at digicool.com  Thu Mar 15 12:55:56 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 06:55:56 -0500
Subject: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: Your message of "Thu, 15 Mar 2001 17:02:24 +1300."
             <200103150402.RAA05333@s454.cosc.canterbury.ac.nz> 
References: <200103150402.RAA05333@s454.cosc.canterbury.ac.nz> 
Message-ID: <200103151155.GAA07429@cj20424-a.reston1.va.home.com>

I'll say one thing and then I'll try to keep my peace about this.

I think that using rationals as the default type for
decimal-with-floating-point notation won't fly.  There are too many
issues, e.g. performance, rounding on display, usability for advanced
users, backwards compatibility.  This means that it just isn't
possible to get a consensus about moving in this direction.

Using decimal floating point won't fly either, for mostly the same
reasons, plus the implementation appears to be riddled with gotcha's
(at least rationals are relatively clean and easy to implement, given
that we already have bignums).

I don't think I have the time or energy to argue this much further --
someone will have to argue until they have a solution that the various
groups (educators, scientists, and programmers) can agree on.  Maybe
language levels will save the world?

That leaves three topics as potential low-hanging fruit:

- Integer unification (PEP 237).  It's mostly agreed that plain ints
  and long ints should be unified.  Simply creating a long where we
  currently overflow would be the easiest route; it has some problems
  (it's not 100% seamless) but I think it's usable and I see no real
  disadvantages.

- Number unification.  This is more controversial, but I believe less
  so than rationals or decimal f.p.  It would remove all semantic
  differences between "1" and "1.0", and therefore 1/2 would return
  0.5.  The latter is separately discussed in PEP 238, but I now
  believe this should only be done as part of a general unification.
  Given my position on decimal f.p. and rationals, this would mean an
  approximate, binary f.p. result for 1/3, and this does not seem to
  have the support of the educators (e.g. Jeff Elkner is strongly
  opposed to teaching floats at all).  But other educators (e.g. Randy
  Pausch, and the folks who did VPython) strongly recommend this based
  on user observation, so there's hope.  As a programmer, as long as
  there's *some* way to spell integer division (even div(i, j) will
  do), I don't mind.  The breakage of existing code will be great, so
  we'll be forced to introduce this gradually using a future_statement
  and warnings.

- "Kinds", as proposed by Paul Dubois.  This doesn't break existing
  code or change existing semantics, it just adds more control for
  those who want it.  I think this might just work.  Will someone
  kindly help Paul get this in PEP form?
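[The int/long unification of PEP 237, the first item above, did happen
gradually starting around Python 2.2; a small sketch of the behavior it
enables, where overflow promotes silently instead of raising:]

```python
# Before PEP 237, arithmetic on plain ints raised OverflowError past
# the machine word size; after unification, results promote to
# arbitrary precision transparently.
n = 2 ** 62
big = n * n                 # far beyond any 32- or 64-bit int
print(big)                  # -> 21267647932558653966460912964485513216
print(big // n == n)        # -> True: no precision was lost
```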

PS.  Moshe, please check in your PEPs.  They need to be on-line.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tismer at tismer.com  Thu Mar 15 13:41:07 2001
From: tismer at tismer.com (Christian Tismer)
Date: Thu, 15 Mar 2001 13:41:07 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103150031.NAA05310@s454.cosc.canterbury.ac.nz>
Message-ID: <3AB0B863.52DFB61C@tismer.com>


Greg Ewing wrote:
> 
> Christian Tismer <tismer at tismer.com>:
> 
> > You can *create* a thread using a callback.
> 
> Okay, that's not so bad. (An earlier message seemed to
> be saying that you couldn't even do that.)
> 
> But what about GUIs such as Tkinter which have a
> main loop in C that keeps control for the life of
> the program? You'll never get back to the base-level
> interpreter, not even between callbacks, so how do
> the uthreads get scheduled?

This would not work.  One simple thing I could think of is
to let the GUI live in one OS thread, and have another
thread for all the microthreads.
More difficult but maybe better: a C main loop which never
runs an interpreter would otherwise block.  But most probably
it will run interpreters from time to time, and these can be
told to take on the scheduling role.
It does not matter at which interpreter level we are;
we just can't switch to frames of other levels.  But
even leaving a frame chain and re-entering later
at a different stack level is no problem.
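[The first arrangement can be sketched with ordinary threads plus
generator-based microthreads. This is only an illustration, not Stackless
itself, and generators arrived in Python 2.2, after this thread: the
scheduler runs in its own OS thread, leaving the main thread free for a
GUI's C main loop.]

```python
import threading

# Round-robin scheduler for cooperative, generator-based "microthreads".
def scheduler(tasks, log):
    while tasks:
        task = tasks.pop(0)
        try:
            log.append(next(task))   # run the task up to its next yield
            tasks.append(task)       # still alive: requeue it
        except StopIteration:
            pass                     # finished: drop it

def microthread(name, n):
    for i in range(n):
        yield '%s%d' % (name, i)     # each yield is a switch point

log = []
t = threading.Thread(target=scheduler,
                     args=([microthread('a', 2), microthread('b', 2)], log))
t.start()
t.join()
print(log)  # -> ['a0', 'b0', 'a1', 'b1']
```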

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From paulp at ActiveState.com  Thu Mar 15 14:30:52 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Thu, 15 Mar 2001 05:30:52 -0800
Subject: [Python-Dev] Before it was called Stackless....
Message-ID: <3AB0C40C.54CAA328@ActiveState.com>

http://www.python.org/workshops/1995-05/WIP.html

I found Guido's "todo list" from 1995. 

	Move the C stack out of the way 

It may be possible to implement Python-to-Python function and method
calls without pushing a C stack frame. This has several advantages -- it
could be more efficient, it may be possible to save and restore the
Python stack to enable migrating programs, and it may be possible to
implement multiple threads without OS specific support (the latter is
questionable however, since it would require a solution for all blocking
system calls). 



-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From tim.one at home.com  Thu Mar 15 16:31:57 2001
From: tim.one at home.com (Tim Peters)
Date: Thu, 15 Mar 2001 10:31:57 -0500
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <200103151155.GAA07429@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com>

[Guido]
> I'll say one thing and then I'll try to keep my peace about this.

If this was one thing, you're suffering major roundoff error <wink>.

> I think that using rationals as the default type for
> decimal-with-floating-point notation won't fly.  There are too many
> issues, e.g. performance, rounding on display, usability for advanced
> users, backwards compatibility.  This means that it just isn't
> possible to get a consensus about moving in this direction.

Agreed.

> Using decimal floating point won't fly either,

If you again mean "by default", also agreed.

> for mostly the same reasons, plus the implementation appears to
> be riddled with gotcha's

It's exactly as difficult or easy as implementing binary fp in software; see
yesterday's link to Cowlishaw's work for detailed pointers; and as I said
before, Cowlishaw earlier agreed (years ago) to let Python use REXX's
implementation code.

> (at least rationals are relatively clean and easy to implement, given
> that we already have bignums).

Oddly enough, I believe rationals are more code in the end (e.g., my own
Rational package is about 3000 lines of Python, but indeed is so general it
subsumes IEEE 854 (the decimal variant of IEEE 754) except for Infs and
NaNs) -- after you add rounding facilities to Rationals, they're as hairy as
decimal fp.
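[A sketch to make the "rounding facilities" point concrete, using the fractions module from much later Pythons (nothing like it existed when this was written); the helper name and the round-half-even choice are illustrative assumptions, not Tim's actual Rational package:]

```python
from fractions import Fraction

def to_decimal_string(r, places):
    # A rational holds 1/3 exactly, but *printing* it forces a rounding
    # policy -- exactly the hair that decimal fp has built in from the start.
    scaled = r * 10 ** places
    q = round(scaled)                    # round-half-even on ties, as in IEEE 754/854
    s = str(q).rjust(places + 1, "0")    # note: non-negative values only
    return s[:-places] + "." + s[-places:]

print(to_decimal_string(Fraction(1, 3), 6))   # -> 0.333333
print(to_decimal_string(Fraction(2, 3), 6))   # -> 0.666667
```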

> I don't think I have the time or energy to argue this much further --
> someone will have to argue until they have a solution that the various
> groups (educators, scientists, and programmers) can agree on.  Maybe
> language levels will save the world?

A per-module directive specifying the default interpretation of fp literals
within the module is an ugly but workable possibility.

> That leaves three topics as potential low-hanging fruit:
>
> - Integer unification (PEP 237).  It's mostly agreed that plain ints
>   and long ints should be unified.  Simply creating a long where we
>   currently overflow would be the easiest route; it has some problems
>   (it's not 100% seamless) but I think it's usable and I see no real
>   disadvantages.

Good!
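[For readers without the PEP at hand, the "simply creating a long where we currently overflow" route looks like this in a post-PEP-237 Python; sys.maxsize stands in for the old sys.maxint, and in Python 2.0 the addition below raised OverflowError instead:]

```python
import sys

big = sys.maxsize      # the largest machine-word int
result = big + 1       # Python 2.0: OverflowError here; after PEP 237 the
                       # value is silently promoted to arbitrary precision
print(result > big)    # -> True, with no overflow and no wraparound
```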

> - Number unification.  This is more controversial, but I believe less
>   so than rationals or decimal f.p.  It would remove all semantic
>   differences between "1" and "1.0", and therefore 1/2 would return
>   0.5.

The only "number unification" PEP on the table does not remove all semantic
differences:  1.0 is tagged as inexact under Moshe's PEP, but 1 is not.  So
this must be some other meaning of "unification" -- just trying to be clear.

>   The latter is separately discussed in PEP 238, but I now believe
>   this should only be done as part of a general unification.
>   Given my position on decimal f.p. and rationals, this would mean an
>   approximate, binary f.p. result for 1/3, and this does not seem to
>   have the support of the educators (e.g. Jeff Elkner is strongly
>   opposed to teaching floats at all).

I think you'd have a very hard time finding any pre-college level teacher who
wants to teach binary fp.  Your ABC experience is consistent with that too.

>  But other educators (e.g. Randy Pausch, and the folks who did
> VPython) strongly recommend this based on user observation, so there's
> hope.

Alice is a red herring!  What they wanted was for 1/2 *not* to mean 0.  I've
read the papers and dissertations too -- there was no plea for binary fp in
those, just that division not throw away info.  The strongest you can claim
using these projects as evidence is that binary fp would be *adequate* for a
newbie graphics application.  And I'd agree with that.  But graphics is a
small corner of education, and either rationals or decimal fp would also be
adequate for newbie graphics.
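[The "1/2 must not mean 0" complaint can be pinned down in a short sketch, written with the later spellings: // for the classic truncating division and / for the true division PEP 238 eventually mandated:]

```python
assert 1 // 2 == 0       # classic int/int division: the information loss
assert 1 / 2 == 0.5      # true division, adequate for newbie graphics
# ...but the result is binary fp, with the surprises educators dislike:
assert 0.1 + 0.2 != 0.3  # decimal literals are not exact in binary fp
```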

>   As a programmer, as long as there's *some* way to spell integer
>   division (even div(i, j) will do), I don't mind.

Yes, I need that too.

>   The breakage of existing code will be great so we'll be forced to
>   introduce this gradually using a future_statement and warnings.
>
> - "Kinds", as proposed by Paul Dubois.  This doesn't break existing
>   code or change existing semantics, it just adds more control for
>   those who want it.  I think this might just work.  Will someone
>   kindly help Paul get this in PEP form?

I will.

> PS.  Moshe, please check in your PEPs.  They need to be on-line.

Absolutely.




From pedroni at inf.ethz.ch  Thu Mar 15 16:39:18 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 15 Mar 2001 16:39:18 +0100 (MET)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
Message-ID: <200103151539.QAA01573@core.inf.ethz.ch>

Hi.

[Christian Tismer]
> Samuele Pedroni wrote:
> > 
> > Hi.
> > 
> > First of all I should admit I ignore what have been discussed
> > at IPC9 about Stackless Python.
> 
> This would have answered your question.
> 
> > My plain question (as jython developer): is there a real intention
> > to make python stackless in the short term (2.2, 2.3...)
> 
> Yes.
Now I know <wink>.

> > AFAIK then for jython there are three options:
> > 1 - Just don't care
> > 2 - A major rewrite with performance issues (but AFAIK nobody has
> >   the resources for doing that)
> > 3 - try to implement some of the highlevel offered features through threads
> >    (which could be pointless from a performance point of view:
> >      e.g. microthreads trough threads, not that nice).
> > 
> > The options are 3, just for the theoretical sake of compatibility
> > (I don't see the point of porting stackless-based python code to jython),
> > or 1 plus some amount of frustration <wink>. Am I missing something?
> > 
> > The problem will be more serious if the std lib will begin to use
> > heavily the stackless features.
> 
> Option 1 would be even fine with me. I would make all
> Stackless features optional, not enforcing them for the
> language.
> Option 2 doesn't look reasonable. We cannot switch
> microthreads without changing the VM. In CPython,
> the VM is available, in Jython it is immutable.
> The only way I would see is to turn Jython into
> an interpreter instead of producing VM code. That
> would do, but at an immense performance cost.
To be honest, each python method invocation takes such a tour
in jython that maybe the cost would not be that much, but
we would lose the smooth java and jython integration and
the possibility of having jython applets...
so it is a no-go, and nobody has time for doing that.

> 
> Option 3 is Guido's view of a compatibility layer.
> Microthreads can be simulated by threads in fact.
> This is slow, but compatible, making stuff just work.
> Most probably this version is performing better than
> option 2.
In the long run that could find a natural solution, at least
wrt uthreads: java is having some success on the server side,
and there is some ongoing research on writing jvms with their
own scheduled lightweight threads, such that a larger number
of threads can be handled in a smoother way.

> I don't believe that the library will become a problem,
> if modifications are made with Jython in mind.
I was thinking about stuff like generators used everywhere,
but that is maybe just uninformed panicking. They are the
kind of stuff that makes programmers addicted <wink>.

> 
> Personally, I'm not convinced that any of these will make
> Jython users happy. 
If they are not informed, they just won't care <wink>

> I tried hard to find out how to make Jython Stackless.
> There was no way yet, I'm very very sorry!
You were trying something impossible <wink>,
the smooth integration with java is the big win of jython,
there is no way of making it stackless and preserving that.

> On the other hand I don't think
> that Jython should play the showstopper for a technology
> that people really want. 
Fine for me.

> Including the stackless machinery
> into Python without enforcing it would be my way.
> Parallel stuff can sit in an extension module.
> Of course there will be a split of modules which don't
> work in Jython, or which are less efficient in Jython.
> But if efficiency is the demand, Jython wouldn't be
> the right choice, anyway.
And python without C isn't that either.
All the dynamic optimisation technology behind the jvm makes it outperform
the pvm for things like tight loops, etc.
And jython can't exploit any of that, because python is too dynamic,
sometimes even in spurious ways.

In different ways they (java,python,... ) all are good approximations of the
Right Thing without being it, for different reasons.
(just a bit of personal frustration ;))

regards.




From guido at digicool.com  Thu Mar 15 16:42:32 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 10:42:32 -0500
Subject: [Python-Dev] Re: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: Your message of "Thu, 15 Mar 2001 10:31:57 EST."
             <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com> 
Message-ID: <200103151542.KAA09191@cj20424-a.reston1.va.home.com>

> I think you'd have a very hard time finding any pre-college level teacher who
> wants to teach binary fp.  Your ABC experience is consistent with that too.

"Want to", no.  But whether they're teaching Java, C++, or Pascal,
they have no choice: if they need 0.5, they'll need binary floating
point, whether they explain it adequately or not.  Possibly they are
all staying away from the decimal point completely, but I find that
hard to believe.

> >  But other educators (e.g. Randy Pausch, and the folks who did
> > VPython) strongly recommend this based on user observation, so there's
> > hope.
> 
> Alice is a red herring!  What they wanted was for 1/2 *not* to mean 0.  I've
> read the papers and dissertations too -- there was no plea for binary fp in
> those, just that division not throw away info.

I never said otherwise.  It just boils down to binary fp as the only
realistic choice.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mal at lemburg.com  Thu Mar 15 17:31:34 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 15 Mar 2001 17:31:34 +0100
Subject: [Python-Dev] Re: WYSIWYG decimal fractions
References: <200103150402.RAA05333@s454.cosc.canterbury.ac.nz> <200103151155.GAA07429@cj20424-a.reston1.va.home.com>
Message-ID: <3AB0EE66.37E6C633@lemburg.com>

Guido van Rossum wrote:
> 
> I'll say one thing and then I'll try to keep my peace about this.
> 
> I think that using rationals as the default type for
> decimal-with-floating-point notation won't fly.  There are too many
> issues, e.g. performance, rounding on display, usability for advanced
> users, backwards compatibility.  This means that it just isn't
> possible to get a consensus about moving in this direction.
> 
> Using decimal floating point won't fly either, for mostly the same
> reasons, plus the implementation appears to be riddled with gotcha's
> (at least rationals are relatively clean and easy to implement, given
> that we already have bignums).
> 
> I don't think I have the time or energy to argue this much further --
> someone will have to argue until they have a solution that the various
> groups (educators, scientists, and programmers) can agree on.  Maybe
> language levels will save the world?

Just out of curiosity: is there a usable decimal type implementation
somewhere on the net which we could beat on ?

I for one would be very interested in having a decimal type
around (with fixed precision and scale), since databases rely
on these a lot and I would like to assure that passing database
data through Python doesn't cause any data loss due to rounding
issues.
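[What "fixed precision and scale" buys for database round-trips can be sketched with the decimal module that much later grew out of the Cowlishaw/IBM work Tim pointed to -- no such module existed when this was written, and the scale of 2 below is an illustrative assumption, not anything mx.Decimal specified:]

```python
from decimal import Decimal, getcontext

getcontext().prec = 28            # working precision for intermediates
price = Decimal("19.99")          # the literal is stored exactly, no fp drift
tax = price * Decimal("0.0825")   # 1.649175, still exact at this precision
# Quantize to scale 2, as a NUMERIC(10, 2) column would on the way back:
total = (price + tax).quantize(Decimal("0.01"))
print(total)                      # -> 21.64
```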

If there aren't any such implementations yet, the site that Tim 
mentioned looks like a good starting point for heading in this 
direction... e.g. for mx.Decimal ;-)

	http://www2.hursley.ibm.com/decimal/

I believe that now with the coercion patches in place, adding
new numeric datatypes should be fairly easy (left aside the
problems intrinsic to numerics themselves).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From martin at loewis.home.cs.tu-berlin.de  Thu Mar 15 17:30:49 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 17:30:49 +0100
Subject: [Python-Dev] Patch Manager Guidelines
Message-ID: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de>

It appears that the Patch Manager Guidelines
(http://python.sourceforge.net/sf-faq.html#a1) don't work with the new
tracker tool anymore. They claim that the status of the patch can be
Open, Accepted, Closed, etc - which is not true: the status can be
only Open, Closed, or Deleted; Accepted is a value of Resolution.

I have the following specific questions: If a patch is accepted, should
it be closed also? If so, how should the resolution change if it is
also committed?

Curious,
Martin



From fdrake at acm.org  Thu Mar 15 17:35:19 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Thu, 15 Mar 2001 11:35:19 -0500 (EST)
Subject: [Python-Dev] Patch Manager Guidelines
In-Reply-To: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de>
References: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de>
Message-ID: <15024.61255.797524.736810@localhost.localdomain>

Martin v. Loewis writes:
 > It appears that the Patch Manager Guidelines
 > (http://python.sourceforge.net/sf-faq.html#a1) don't work with the new
 > tracker tool anymore. They claim that the status of the patch can be
 > Open, Accepted, Closed, etc - which is not true: the status can be
 > only Open, Closed, or Deleted; Accepted is a value of Resolution.

  Thanks for pointing this out!

 > I have the following specific questions: If a patch is accepted, should
 > it be closed also? If so, how should the resolution change if it is
 > also committed?

  I've been setting a patch to accepted-but-open if it needs to be
checked in, and then closing it once the checkin has been made.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Thu Mar 15 17:44:54 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 11:44:54 -0500
Subject: [Python-Dev] Patch Manager Guidelines
In-Reply-To: Your message of "Thu, 15 Mar 2001 17:30:49 +0100."
             <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de> 
References: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de> 
Message-ID: <200103151644.LAA09360@cj20424-a.reston1.va.home.com>

> It appears that the Patch Manager Guidelines
> (http://python.sourceforge.net/sf-faq.html#a1) don't work with the new
> tracker tool anymore. They claim that the status of the patch can be
> Open, Accepted, Closed, etc - which is not true: the status can be
> only Open, Closed, or Deleted; Accepted is a value of Resolution.
> 
> I have the following specific questions: If a patch is accepted, should
> it be closed also? If so, how should the resolution change if it is
> also committed?

A patch should only be closed after it has been committed; otherwise
it's too easy to lose track of it.  So I guess the proper sequence is

1. accept; Resolution set to Accepted

2. commit; Status set to Closed

I hope the owner of the sf-faq document can fix it.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Thu Mar 15 18:22:41 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 18:22:41 +0100
Subject: [Python-Dev] Preparing 2.0.1
Message-ID: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>

I've committed a few changes to the 2.0 release branch, and I'd
propose to follow the following procedure when doing so:

- In the checkin message, indicate which file version from the
  mainline is being copied into the release branch.

- In Misc/NEWS, indicate what bugs have been fixed by installing these
  patches. If it was a patch in response to a SF bug report, listing
  the SF bug id should be sufficient; I've put some instructions into
  Misc/NEWS on how to retrieve the bug report for a bug id.

I'd also propose that 2.0.1, at a minimum, should contain the patches
listed on the 2.0 MoinMoin

http://www.python.org/cgi-bin/moinmoin

I've done so only for the _tkinter patch, which was both listed as
critical, and which closed 2 SF bug reports. I've verified that the
sre_parse patch also closes a number of SF bug reports, but have not
copied it to the release branch.

Please let me know what you think.

Martin



From guido at digicool.com  Thu Mar 15 18:39:32 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 12:39:32 -0500
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: Your message of "Thu, 15 Mar 2001 18:22:41 +0100."
             <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> 
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> 
Message-ID: <200103151739.MAA09627@cj20424-a.reston1.va.home.com>

Excellent, Martin!

There are way more patches that we *could* add than those listed on
the MoinMoin Wiki, though.

I hope that somebody has the time to wade through the 2.1 code to look
for gems.  These should all be *pure* bugfixes!

I haven't seen Aahz' PEP in detail yet; I hope there isn't a
requirement that 2.0.1 come out before 2.1?  The licensing stuff may
be holding 2.0.1 up. :-(

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fredrik at effbot.org  Thu Mar 15 19:15:17 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Thu, 15 Mar 2001 19:15:17 +0100
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>
Message-ID: <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid>

Martin wrote:
> I've verified that the sre_parse patch also closes a number of SF
> bug reports, but have not copied it to the release branch.

it's probably best to upgrade to the current SRE code base.

also, it would make sense to bump makeunicodedata.py to 1.8,
and regenerate the unicode database (this adds 38,642 missing
unicode characters).

I'll look into this this weekend, if I find the time.

Cheers /F




From mwh21 at cam.ac.uk  Thu Mar 15 19:28:48 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: Thu, 15 Mar 2001 18:28:48 +0000 (GMT)
Subject: [Python-Dev] python-dev summary, 2001-03-01 - 2001-03-15
Message-ID: <Pine.LNX.4.10.10103151820200.24973-100000@localhost.localdomain>

 This is a summary of traffic on the python-dev mailing list between
 Mar 1 and Mar 14 (inclusive) 2001.  It is intended to inform the
 wider Python community of ongoing developments.  To comment, just
 post to python-list at python.org or comp.lang.python in the usual
 way. Give your posting a meaningful subject line, and if it's about a
 PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep
 iteration) All python-dev members are interested in seeing ideas
 discussed by the community, so don't hesitate to take a stance on a
 PEP if you have an opinion.

 This is the third python-dev summary written by Michael Hudson.
 Previous summaries were written by Andrew Kuchling and can be found
 at:

   <http://www.amk.ca/python/dev/>

 New summaries will appear at:

  <http://starship.python.net/crew/mwh/summaries/>

 and will continue to be archived at Andrew's site.

   Posting distribution (with apologies to mbm)

   Number of articles in summary: 264

    50 |                                             ]|[        
       |                                             ]|[        
       |                                             ]|[        
       |                                             ]|[        
    40 | ]|[                                         ]|[        
       | ]|[                                         ]|[        
       | ]|[                                         ]|[        
       | ]|[                                         ]|[ ]|[    
    30 | ]|[                                         ]|[ ]|[    
       | ]|[                                         ]|[ ]|[    
       | ]|[                                         ]|[ ]|[ ]|[
       | ]|[                                         ]|[ ]|[ ]|[
    20 | ]|[                                         ]|[ ]|[ ]|[
       | ]|[ ]|[                                     ]|[ ]|[ ]|[
       | ]|[ ]|[                                     ]|[ ]|[ ]|[
       | ]|[ ]|[                                 ]|[ ]|[ ]|[ ]|[
    10 | ]|[ ]|[ ]|[                             ]|[ ]|[ ]|[ ]|[
       | ]|[ ]|[ ]|[     ]|[                     ]|[ ]|[ ]|[ ]|[
       | ]|[ ]|[ ]|[     ]|[ ]|[                 ]|[ ]|[ ]|[ ]|[
       | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[ ]|[ ]|[ ]|[
     0 +-050-022-012-004-009-006-003-002-003-005-017-059-041-031
        Thu 01| Sat 03| Mon 05| Wed 07| Fri 09| Sun 11| Tue 13|
            Fri 02  Sun 04  Tue 06  Thu 08  Sat 10  Mon 12  Wed 14

 A quiet fortnight on python-dev; the conference a week ago is
 responsible for some of that, but also discussion has been springing
 up on other mailing lists (including the types-sig, doc-sig,
 python-iter and stackless lists, and those are just the ones your
 author is subscribed to).


   * Bug Fix Releases *

 Aahz posted a proposal for a 2.0.1 release, fixing the bugs that have
 been found in 2.0 but not adding the new features.

  <http://mail.python.org/pipermail/python-dev/2001-March/013389.html>

 Guido's response was, essentially, "Good idea, but I don't have the
 time to put into it", and that the wider community would have to put
 in some of the donkey work if this is going to happen.  Signs so far
 are encouraging.


    * Numerics *

 Moshe Zadka posted three new PEP-drafts:

  <http://mail.python.org/pipermail/python-dev/2001-March/013435.html>

 which on discussion became four new PEPs, which are not yet online
 (hint, hint).

 The four titles are

    Unifying Long Integers and Integers
    Non-integer Division
    Adding a Rational Type to Python
    Adding a Rational Literal to Python

 and they will appear fairly soon at

  <http://python.sourceforge.net/peps/pep-0237.html>
  <http://python.sourceforge.net/peps/pep-0238.html>
  <http://python.sourceforge.net/peps/pep-0239.html>
  <http://python.sourceforge.net/peps/pep-0240.html>

 respectively.

 Although pedantically falling slightly out of the remit of this
 summary, I should mention Guido's partial BDFL pronouncement:

  <http://mail.python.org/pipermail/python-dev/2001-March/013587.html>

 A new mailing list had been setup to discuss these issues:

  <http://lists.sourceforge.net/lists/listinfo/python-numerics>


    * Revive the types-sig? *

 Paul Prescod has single-handedly kicked the types-sig into life
 again.

  <http://mail.python.org/sigs/types-sig/>

 The discussion this time seems to be centered on interfaces and how to
 use them effectively.  You never know, we might get somewhere this
 time!

    * stackless *

 Jeremy Hylton posted some comments on Gordon McMillan's new draft of
 the stackless PEP (PEP 219) and the stackless dev day discussion at
 Spam 9.

  <http://mail.python.org/pipermail/python-dev/2001-March/013494.html>

 The discussion has mostly focussed on technical issues; there has
 been no comment on if or when the core Python will become stackless.


    * miscellanea *

 There was some discussion on nested scopes, but mainly on
 implementation issues.  Thomas Wouters promised <wink> to sort out
 the "continue in finally: clause" wart.

Cheers,
M.




From esr at golux.thyrsus.com  Thu Mar 15 19:35:30 2001
From: esr at golux.thyrsus.com (Eric)
Date: Thu, 15 Mar 2001 10:35:30 -0800
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: <200103142305.SAA05872@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Wed, Mar 14, 2001 at 06:05:50PM -0500
References: <E14cx6s-0002zN-00@golux.thyrsus.com> <200103142305.SAA05872@cj20424-a.reston1.va.home.com>
Message-ID: <20010315103530.C1530@thyrsus.com>

Guido van Rossum <guido at digicool.com>:
> > I have fixed some obvious errors (use of the deprecated 'cmp' module;
> > use of regex) but I have encountered run-time errors that are beyond
> > my competence to fix.  From a cursory inspection of the code it looks
> > to me like the freeze tools need adaptation to the new
> > distutils-centric build process.
> 
> The last maintainers were me and Mark Hammond, but neither of us has
> time to look into this right now.  (At least I know I don't.)
> 
> What kind of errors do you encounter?

After cleaning up the bad imports, use of regex, etc, first thing I see
is an assertion failure in the module finder.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

"They that can give up essential liberty to obtain a little temporary 
safety deserve neither liberty nor safety."
	-- Benjamin Franklin, Historical Review of Pennsylvania, 1759.



From guido at digicool.com  Thu Mar 15 19:49:21 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 13:49:21 -0500
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: Your message of "Thu, 15 Mar 2001 10:35:30 PST."
             <20010315103530.C1530@thyrsus.com> 
References: <E14cx6s-0002zN-00@golux.thyrsus.com> <200103142305.SAA05872@cj20424-a.reston1.va.home.com>  
            <20010315103530.C1530@thyrsus.com> 
Message-ID: <200103151849.NAA09878@cj20424-a.reston1.va.home.com>

> > What kind of errors do you encounter?
> 
> After cleaning up the bad imports, use of regex, etc, first thing I see
> is an assertion failure in the module finder.

Are you sure you are using the latest CVS version of freeze?  I didn't
have to clean up any bad imports -- it just works for me.  But maybe
I'm not using all the features?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Thu Mar 15 19:49:37 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 19:49:37 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid> (fredrik@effbot.org)
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid>
Message-ID: <200103151849.f2FInbd02760@mira.informatik.hu-berlin.de>

> it's probably best to upgrade to the current SRE code base.

I'd be concerned about the "pure bugfix" nature of the current SRE
code base. It is probably minor things, like the addition of

+    PyDict_SetItemString(
+        d, "MAGIC", (PyObject*) PyInt_FromLong(SRE_MAGIC)
+        );

+# public symbols
+__all__ = [ "match", "search", "sub", "subn", "split", "findall",
+    "compile", "purge", "template", "escape", "I", "L", "M", "S", "X",
+    "U", "IGNORECASE", "LOCALE", "MULTILINE", "DOTALL", "VERBOSE",
+    "UNICODE", "error" ]
+

+DEBUG = sre_compile.SRE_FLAG_DEBUG # dump pattern after compilation

-    def getgroup(self, name=None):
+    def opengroup(self, name=None):

The famous last words here are "those changes can do no
harm". However, somebody might rely on Pattern objects having a
getgroup method (even though it is not documented). Some code (relying
on undocumented features) may break with 2.1, which is acceptable; it
is not acceptable for a bugfix release.

For the bugfix release, I'd feel much better if a clear set of pure
bug fixes were identified, along with a list of bugs they fix. So "no
new features" would rule out a new constant named MAGIC (*).

If a "pure bugfix" happens to break something as well, we can at least
find out what it fixed in return, and then probably find that the fix
justified the breakage.

Regards,
Martin

(*) There is also a new constant, AT_BEGINNING_STRING, but it appears
that it was introduced in response to a bug report.



From esr at golux.thyrsus.com  Thu Mar 15 19:54:17 2001
From: esr at golux.thyrsus.com (Eric)
Date: Thu, 15 Mar 2001 10:54:17 -0800
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: <200103151849.NAA09878@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 15, 2001 at 01:49:21PM -0500
References: <E14cx6s-0002zN-00@golux.thyrsus.com> <200103142305.SAA05872@cj20424-a.reston1.va.home.com> <20010315103530.C1530@thyrsus.com> <200103151849.NAA09878@cj20424-a.reston1.va.home.com>
Message-ID: <20010315105417.J1530@thyrsus.com>

Guido van Rossum <guido at digicool.com>:
> Are you sure you are using the latest CVS version of freeze?  I didn't
> have to clean up any bad imports -- it just works for me.  But maybe
> I'm not using all the features?

I'll cvs update and check.  Thanks.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Still, if you will not fight for the right when you can easily
win without bloodshed, if you will not fight when your victory
will be sure and not so costly, you may come to the moment when
you will have to fight with all the odds against you and only a
precarious chance for survival. There may be a worse case.  You
may have to fight when there is no chance of victory, because it
is better to perish than to live as slaves.
	--Winston Churchill



From skip at pobox.com  Thu Mar 15 20:14:59 2001
From: skip at pobox.com (Skip Montanaro)
Date: Thu, 15 Mar 2001 13:14:59 -0600 (CST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103150614.BAA04221@panix6.panix.com>
References: <200103150614.BAA04221@panix6.panix.com>
Message-ID: <15025.5299.651586.244121@beluga.mojam.com>

    aahz> Starting with Python 2.0, all feature releases are required to
    aahz> have the form X.Y; patch releases will always be of the form
    aahz> X.Y.Z.  To clarify the distinction between a bug fix release and a
    aahz> patch release, all non-bug fix patch releases will have the suffix
    aahz> "p" added.  For example, "2.1" is a feature release; "2.1.1" is a
    aahz> bug fix release; and "2.1.2p" is a patch release that contains
    aahz> minor feature enhancements.

I don't understand the need for (or fundamental difference between) bug fix
and patch releases.  If 2.1 is the feature release and 2.1.1 is a bug fix
release, is 2.1.2p a branch off of 2.1.2 or 2.1.1?

    aahz> The Patch Czar is the counterpart to the BDFL for patch releases.
    aahz> However, the BDFL and designated appointees retain veto power over
    aahz> individual patches and the decision of whether to label a patch
    aahz> release as a bug fix release.

I propose that instead of (or in addition to) the Patch Czar you have a
Release Shepherd (RS) for each feature release, presumably someone motivated
to help maintain that particular release.  This person (almost certainly
someone outside PythonLabs) would be responsible for the bug fix releases
associated with a single feature release.  Your use of 2.1's sre as a "small
feature change" for 2.0 and 1.5.2 is an example where having an RS for each
feature release would be worthwhile.  Applying sre 2.1 to the 2.0 source
would probably be reasonably easy.  Adding it to 1.5.2 would be much more
difficult (no Unicode), and so would quite possibly be accepted by the 2.0
RS and rejected by the 1.5.2 RS.

As time passes, interest in further bug fix releases for specific feature
releases will probably wane.  When interest drops far enough the RS could
simply declare that branch closed and move on to other things.

I envision the Patch Czar voting a general yea or nay on a specific patch,
then passing it along to all the current RSs, who would make the final
decision about whether that patch is appropriate for the release they are
managing.

I suggest dumping the patch release concept and just going with bug fix
releases.  The system will be complex enough without them.  If it proves
desirable later, you can always add them.

Skip



From fredrik at effbot.org  Thu Mar 15 20:25:45 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Thu, 15 Mar 2001 20:25:45 +0100
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid> <200103151849.f2FInbd02760@mira.informatik.hu-berlin.de>
Message-ID: <03d101c0ad85$bc812610$e46940d5@hagrid>

martin wrote:

> I'd be concerned about the "pure bugfix" nature of the current SRE
> code base. 

well, unlike you, I wrote the code.

> -    def getgroup(self, name=None):
> +    def opengroup(self, name=None):
> 
> The famous last words here are "those changes can do no
> harm". However, somebody might rely on Pattern objects having a
> getgroup method (even though it is not documented).

it may sound weird, but I'd rather support people who rely on regular
expressions working as documented...

> For the bugfix release, I'd feel much better if a clear set of pure
> bug fixes were identified, along with a list of bugs they fix. So "no
> new feature" would rule out "no new constant named MAGIC" (*).

what makes you so sure that MAGIC wasn't introduced to deal with
a bug report?  (hint: it was)

> > If a "pure bugfix" happens to break something as well, we can at least
> find out what it fixed in return, and then probably find that the fix
> justified the breakage.

more work, and far fewer bugs fixed.  let's hope you have lots of
volunteers lined up...

Cheers /F




From fredrik at pythonware.com  Thu Mar 15 20:43:11 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 15 Mar 2001 20:43:11 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
References: <200103150614.BAA04221@panix6.panix.com> <15025.5299.651586.244121@beluga.mojam.com>
Message-ID: <000f01c0ad88$2cd4b970$e46940d5@hagrid>

skip wrote:
> I suggest dumping the patch release concept and just going with bug fix
> releases.  The system will be complex enough without them.  If it proves
> desirable later, you can always add them.

agreed.

> Applying sre 2.1 to the 2.0 source would probably be reasonably easy.
> Adding it to 1.5.2 would be much more difficult (no Unicode), and so
> would quite possibly be accepted by the 2.0 RS and rejected by the
> 1.5.2 RS.

footnote: SRE builds and runs just fine under 1.5.2:

    http://www.pythonware.com/products/sre

Cheers /F




From thomas.heller at ion-tof.com  Thu Mar 15 21:00:19 2001
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Thu, 15 Mar 2001 21:00:19 +0100
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>
Message-ID: <0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>

[Martin v. Loewis]
> I'd also propose that 2.0.1, at a minimum, should contain the patches
> listed on the 2.0 MoinMoin
> 
> http://www.python.org/cgi-bin/moinmoin
> 
So how should requests for patches be submitted?
Should I enter them into the wiki, post to python-dev,
email to aahz?

I would kindly request two of the fixed bugs I reported to
go into 2.0.1:

Bug id 231064, sys.path not set correctly in embedded python interpreter
Bug id 221965, 10 in xrange(10) returns 1
(I would consider the last one as critical)
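
[Editorial sketch: bug 221965 can be illustrated with the modern range
object, the successor of xrange; the assertions show the *corrected*
behaviour, not the buggy 2.0 result.]

```python
# Bug 221965: in Python 2.0, ``10 in xrange(10)`` wrongly returned 1 (true).
# The corrected containment semantics, shown with the modern range object:
r = range(10)            # models xrange(10): the integers 0..9
assert 10 not in r       # 10 is one past the end, so membership must be false
assert 0 in r and 9 in r
```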

Thomas




From aahz at pobox.com  Thu Mar 15 21:11:31 2001
From: aahz at pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 12:11:31 -0800 (PST)
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <no.id> from "Thomas Heller" at Mar 15, 2001 09:00:19 PM
Message-ID: <200103152011.PAA28835@panix3.panix.com>

> So how should requests for patches be submitted?
> Should I enter them into the wiki, post to python-dev,
> email to aahz?

As you'll note in PEP 6, this is one of the issues that needs some
resolving.  The correct solution long-term will likely involve some
combination of a new mailing list (so python-dev doesn't get overwhelmed)
and SourceForge bug management.  In the meantime, I'm keeping a record.

Part of the problem in simply moving forward is that I am neither on
python-dev myself nor do I have CVS commit privileges; I'm also not much
of a C programmer.  Thomas Wouters and Jeremy Hylton have made statements
that could be interpreted as saying that they're willing to be the Patch
Czar, but while I assume that either would be passed by acclamation, I'm
certainly not going to shove it on them.  If either accepts, I'll be glad
to take on whatever administrative tasks they ask for.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"The overexamined life sure is boring."  --Loyal Mini Onion



From martin at loewis.home.cs.tu-berlin.de  Thu Mar 15 21:39:14 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 21:39:14 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <03d101c0ad85$bc812610$e46940d5@hagrid> (fredrik@effbot.org)
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid> <200103151849.f2FInbd02760@mira.informatik.hu-berlin.de> <03d101c0ad85$bc812610$e46940d5@hagrid>
Message-ID: <200103152039.f2FKdEQ22768@mira.informatik.hu-berlin.de>

> > I'd be concerned about the "pure bugfix" nature of the current SRE
> > code base. 
> 
> well, unlike you, I wrote the code.

I am aware of that. My apologies if I suggested otherwise.

> it may sound weird, but I'd rather support people who rely on regular
> expressions working as documented...

That is not weird at all.

> > For the bugfix release, I'd feel much better if a clear set of pure
> > bug fixes were identified, along with a list of bugs they fix. So "no
> > new feature" would rule out "no new constant named MAGIC" (*).
> 
> what makes you so sure that MAGIC wasn't introduced to deal with
> a bug report?  (hint: it was)

I am not sure. What was the bug report that caused its introduction?

> > If a "pure bugfix" happens to break something as well, we can at least
> > find out what it fixed in return, and then probably find that the fix
> > justified the breakage.
> 
> more work, and far fewer bugs fixed.  let's hope you have lots of
> volunteers lined up...

Nobody has asked *you* to do that work. If you think your time is
better spent in fixing existing bugs instead of back-porting the fixes
to 2.0 - there is nothing wrong with that at all. It all depends on
what the volunteers are willing to do.

Regards,
Martin



From guido at digicool.com  Thu Mar 15 22:14:16 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 16:14:16 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Thu, 15 Mar 2001 20:43:11 +0100."
             <000f01c0ad88$2cd4b970$e46940d5@hagrid> 
References: <200103150614.BAA04221@panix6.panix.com> <15025.5299.651586.244121@beluga.mojam.com>  
            <000f01c0ad88$2cd4b970$e46940d5@hagrid> 
Message-ID: <200103152114.QAA10305@cj20424-a.reston1.va.home.com>

> skip wrote:
> > I suggest dumping the patch release concept and just going with bug fix
> > releases.  The system will be complex enough without them.  If it proves
> > desirable later, you can always add them.
> 
> agreed.

+1

> > Applying sre 2.1 to the 2.0 source would probably be reasonably easy.
> > Adding it to 1.5.2 would be much more difficult (no Unicode), and so
> > would quite possibly be accepted by the 2.0 RS and rejected by the
> > 1.5.2 RS.
> 
> footnote: SRE builds and runs just fine under 1.5.2:
> 
>     http://www.pythonware.com/products/sre

In the specific case of SRE, I'm +1 on keeping the code base in 2.0.1
completely synchronized with 2.1.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Thu Mar 15 22:32:47 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 22:32:47 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>
	(thomas.heller@ion-tof.com)
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>
Message-ID: <200103152132.f2FLWlE29312@mira.informatik.hu-berlin.de>

> So how should requests for patches be submitted?
> Should I enter them into the wiki, post to python-dev,
> email to aahz?

Personally, I think 2.0.1 should be primarily driven by user requests;
I think this is also the spirit of the PEP. I'm not even sure that
going over the entire code base systematically and copying all bug
fixes is a good idea.

In that sense, having somebody collect these requests is probably the
right approach. In this specific case, I'll take care of them, unless
somebody else proposes a different procedure. For the record, you are
requesting inclusion of

rev 1.23 of PC/getpathp.c
rev 2.21, 2.22 of Objects/rangeobject.c
rev 1.20 of Lib/test/test_b2.py

Interestingly enough, 2.22 of rangeobject.c also adds three attributes
to the xrange object: start, stop, and step. That is clearly a new
feature; should the patch still be moved into 2.0.1 as-is? If not, the
bug fix alone must be back-ported to 2.0 without the new attributes.
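
[Editorial sketch: the attributes in question survive on the modern range
object, so the feature rev 2.22 exposed can be shown directly; this is not
the 2.0-era code.]

```python
# rev 2.22 of Objects/rangeobject.c exposed start, stop and step as
# attributes of xrange objects; modern range objects carry the same three:
r = range(2, 20, 3)
assert (r.start, r.stop, r.step) == (2, 20, 3)
```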

I think we need a policy decision here, which could probably have
one of three outcomes:
1. everybody with CVS commit access can decide to move patches from
   the mainline to the branch. That would mean I could move these
   patches, and Fredrik Lundh could install the sre code base as-is.

2. the author of the original patch can make that decision. That would
   mean that Fredrik Lundh can still install his code as-is, but I'd
   have to ask Fred's permission.

3. the bug release coordinator can make that decision. That means that
   Aahz must decide.

If it is 1 or 2, some guideline is probably needed as to what exactly
is suitable for inclusion into 2.0.1. Guido has requested "*pure*
bugfixes", which, to me, says

a) sre must be carefully reviewed change for change
b) the three attributes on xrange objects must not appear in 2.0.1

In any case, I'm in favour of a much more careful operation for a
bugfix release. That probably means not all bugs that have been fixed
already will be fixed in 2.0.1; I would not expect otherwise.

Regards,
Martin



From aahz at pobox.com  Thu Mar 15 23:21:12 2001
From: aahz at pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 14:21:12 -0800 (PST)
Subject: [Python-Dev] Re: Preparing 2.0.1
In-Reply-To: <no.id> from "Martin v. Loewis" at Mar 15, 2001 06:22:41 PM
Message-ID: <200103152221.RAA16060@panix3.panix.com>

> - In the checkin message, indicate which file version from the
>   mainline is being copied into the release branch.

Sounds good.

> - In Misc/NEWS, indicate what bugs have been fixed by installing these
>   patches. If it was a patch in response to a SF bug report, listing
>   the SF bug id should be sufficient; I've put some instructions into
>   Misc/NEWS on how to retrieve the bug report for a bug id.

Good, too.

> I've done so only for the _tkinter patch, which was both listed as
> critical, and which closed 2 SF bug reports. I've verified that the
> sre_parse patch also closes a number of SF bug reports, but have not
> copied it to the release branch.

I'm a little concerned that the 2.0 branch is being updated without a
2.0.1 target created, but it's quite possible my understanding of how
this should work is faulty.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From aahz at pobox.com  Thu Mar 15 23:34:26 2001
From: aahz at pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 14:34:26 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <no.id> from "Skip Montanaro" at Mar 15, 2001 01:14:59 PM
Message-ID: <200103152234.RAA16951@panix3.panix.com>

>     aahz> Starting with Python 2.0, all feature releases are required to
>     aahz> have the form X.Y; patch releases will always be of the form
>     aahz> X.Y.Z.  To clarify the distinction between a bug fix release and a
>     aahz> patch release, all non-bug fix patch releases will have the suffix
>     aahz> "p" added.  For example, "2.1" is a feature release; "2.1.1" is a
>     aahz> bug fix release; and "2.1.2p" is a patch release that contains
>     aahz> minor feature enhancements.
> 
> I don't understand the need for (or fundamental difference between) bug fix
> and patch releases.  If 2.1 is the feature release and 2.1.1 is a bug fix
> release, is 2.1.2p a branch off of 2.1.2 or 2.1.1?

That's one of the issues that needs to be resolved if we permit both
patch releases and bug fix releases.  My preference would be that 2.1.2p
is a branch from 2.1.1.

>     aahz> The Patch Czar is the counterpart to the BDFL for patch releases.
>     aahz> However, the BDFL and designated appointees retain veto power over
>     aahz> individual patches and the decision of whether to label a patch
>     aahz> release as a bug fix release.
> 
> I propose that instead of (or in addition to) the Patch Czar you have a
> Release Shepherd (RS) for each feature release, presumably someone motivated
> to help maintain that particular release.  This person (almost certainly
> someone outside PythonLabs) would be responsible for the bug fix releases
> associated with a single feature release.  Your use of 2.1's sre as a "small
> feature change" for 2.0 and 1.5.2 is an example where having an RS for each
> feature release would be worthwhile.  Applying sre 2.1 to the 2.0 source
> would probably be reasonably easy.  Adding it to 1.5.2 would be much more
> difficult (no Unicode), and so would quite possibly be accepted by the 2.0
> RS and rejected by the 1.5.2 RS.

That may be a good idea.  Comments from others?  (Note that in the case
of sre, I was aware that Fredrik had already backported to both 2.0 and
1.5.2.)

> I suggest dumping the patch release concept and just going with bug fix
> releases.  The system will be complex enough without them.  If it proves
> desirable later, you can always add them.

Well, that was my original proposal before turning this into an official
PEP.  The stumbling block was the example of the case-sensitive import
patch (that permits Python's use on BeOS and MacOS X) for 2.1.  Both
Guido and Tim stated their belief that this was a "feature" and not a
"bug fix" (and I don't really disagree with them).  This leaves the
following options (assuming that backporting the import fix doesn't break
one of the Prohibitions):

* Change the minds of Guido/Tim to make the import issue a bugfix.

* Don't backport case-sensitive imports to 2.0.

* Permit minor feature additions/changes.

If we choose that last option, I believe a distinction should be drawn
between releases that contain only bugfixes and releases that contain a
bit more.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From thomas at xs4all.net  Thu Mar 15 23:37:37 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 15 Mar 2001 23:37:37 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103150614.BAA04221@panix6.panix.com>; from aahz@panix.com on Thu, Mar 15, 2001 at 01:14:54AM -0500
References: <200103150614.BAA04221@panix6.panix.com>
Message-ID: <20010315233737.B29286@xs4all.nl>

On Thu, Mar 15, 2001 at 01:14:54AM -0500, aahz at panix.com wrote:
> [posted to c.l.py.announce and c.l.py; followups to c.l.py; cc'd to
> python-dev]

>     Patch releases are required to adhere to the following
>     restrictions:

>     1. There must be zero syntax changes.  All .pyc and .pyo files
>        must work (no regeneration needed) with all patch releases
>        forked off from a feature release.

Hmm... Would making 'continue' work inside 'try' count as a bugfix or as a
feature ? It's technically not a syntax change, but practically it is.
(Invalid syntax suddenly becomes valid.) 
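
[Editorial sketch: in Python 2.0 the ``continue`` below was a SyntaxError;
2.1 made it legal, and modern Python still accepts it.]

```python
# ``continue`` inside a try block: a syntax error in 2.0, valid from 2.1 on.
total = 0
for i in range(5):
    try:
        if i % 2:      # skip odd numbers
            continue
        total += i
    finally:
        pass           # cleanup still runs, even when we continue
assert total == 0 + 2 + 4
```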

>   Bug Fix Releases

>     Bug fix releases are a subset of all patch releases; it is
>     prohibited to add any features to the core in a bug fix release.
>     A patch release that is not a bug fix release may contain minor
>     feature enhancements, subject to the Prohibitions section.

I'm not for this 'bugfix release' / 'patch release' distinction. The
numbering/naming convention is too confusing, not clear enough, and I don't
see the added benefit of adding limited features. If people want features,
they should go and get a feature release. The most important bit in patch
('bugfix') releases is not to add more bugs, and rewriting parts of code to
fix a bug is something that is quite likely to insert more bugs. Sure, as
the patch coder, you are probably certain there are no bugs -- but so was
whoever added the bug in the first place :)

>     The Patch Czar decides when there are a sufficient number of
>     patches to warrant a release.  The release gets packaged up,
>     including a Windows installer, and made public as a beta release.
>     If any new bugs are found, they must be fixed and a new beta
>     release publicized.  Once a beta cycle completes with no new bugs
>     found, the package is sent to PythonLabs for certification and
>     publication on python.org.

>     Each beta cycle must last a minimum of one month.

This process probably needs a firm smack with reality, but that would have
to wait until it meets some, first :) Deciding when to do a bugfix release
is very tricky: some bugs warrant a quick release, but waiting to assemble
more is generally a good idea. The whole beta cycle and windows
installer/RPM/etc process is also a bottleneck. Will Tim do the Windows
Installer (or whoever does it for the regular releases) ? If he's building
the installer anyway, why can't he 'bless' the release right away ?

I'm also not sure if a beta cycle in a bugfix release is really necessary,
especially a month-long one. We have a feature release planned every six
months, and a feature release generally gets 2 alphas and 2 betas, sometimes
a release candidate, plus the release itself. A bugfix release would have
one or two betas too; say we do two bugfix releases in those six months, and
that makes 10+ 'releases' of various forms in those 6 months. Ain't
no-one[*] going to check them out for a decent spin; they'll just wait for
the final version.

>     Should the first patch release following any feature release be
>     required to be a bug fix release?  (Aahz proposes "yes".)
>     Is it allowed to do multiple forks (e.g. is it permitted to have
>     both 2.0.2 and 2.0.2p)?  (Aahz proposes "no".)
>     Does it makes sense for a bug fix release to follow a patch
>     release?  (E.g., 2.0.1, 2.0.2p, 2.0.3.)

More reasons not to have separate featurebugfixreleasethingies and
bugfix-releases :)

>     What is the equivalent of python-dev for people who are
>     responsible for maintaining Python?  (Aahz proposes either
>     python-patch or python-maint, hosted at either python.org or
>     xs4all.net.)

It would probably never be hosted at .xs4all.net. We use the .net address
for network related stuff, and as a nice Personality Enhancer (read: IRC
dick extender) for employees. We'd be happy to host stuff, but I would
actually prefer to have it under a python.org or some other python-related
domainname. That forestalls python questions going to admin at xs4all.net :) A
small logo somewhere on the main page would be nice, but stuff like that
should be discussed if it's ever an option, not just because you like the
name 'XS4ALL' :-)

>     Does SourceForge make it possible to maintain both separate and
>     combined bug lists for multiple forks?  If not, how do we mark
>     bugs fixed in different forks?  (Simplest is to simply generate a
>     new bug for each fork that it gets fixed in, referring back to the
>     main bug number for details.)

We could make it a separate SF project, just for the sake of keeping
bugreports/fixes in the maintenance branch and the head branch apart. The
main Python project already has an unwieldy number of open bugreports and
patches.

I'm also for starting the maintenance branch right after the real release,
and start adding bugfixes to it right away, as soon as they show up. Keeping
up to date on bugfixes to the head branch is then as 'simple' as watching
python-checkins. (Up until the moment a whole subsystem gets rewritten, that
is :) People should still be able to submit bugfixes for the maintenance
branch specifically.

And I'm still willing to be the patch monkey, though I don't think I'm the
only or the best candidate. I'll happily contribute regardless of who gets
the blame :)

[*] There, that better, Moshe ?
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Thu Mar 15 23:44:21 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 15 Mar 2001 23:44:21 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103152234.RAA16951@panix3.panix.com>; from aahz@pobox.com on Thu, Mar 15, 2001 at 02:34:26PM -0800
References: <no.id> <200103152234.RAA16951@panix3.panix.com>
Message-ID: <20010315234421.C29286@xs4all.nl>

On Thu, Mar 15, 2001 at 02:34:26PM -0800, Aahz Maruch wrote:

[ How to get case-insensitive import fixed in 2.0.x ]

> * Permit minor feature additions/changes.

> If we choose that last option, I believe a distinction should be drawn
> between releases that contain only bugfixes and releases that contain a
> bit more.

We could make the distinction in the release notes. It could be a
'PURE BUGFIX RELEASE' or a 'FEATURE FIX RELEASE'. Bugfix releases just fix
bugs, that is, wrong behaviour. Feature fix releases fix misfeatures, like
the case-insensitive import issue. The difference between the two should be
explained in the paragraph following the header, for *each* release. For
example,

This is a 		PURE BUGFIX RELEASE.
This means that it only fixes behaviour that was previously giving an error,
or providing obviously wrong results. Only code relying on the outcome of
obviously incorrect code can be affected.

and

This is a 		FEATURE FIX RELEASE
This means that the (unexpected) behaviour of one or more features was
changed. This is a low-impact change that is unlikely to affect anyone, but
it is theoretically possible. See below for a list of possible effects: 
[ list of mis-feature-fixes and their result. ]

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From greg at cosc.canterbury.ac.nz  Thu Mar 15 23:45:50 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 11:45:50 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AB0B863.52DFB61C@tismer.com>
Message-ID: <200103152245.LAA05494@s454.cosc.canterbury.ac.nz>

> But most probably, it will run interpreters from time to time.
> These can be told to take the scheduling role on.

You'll have to expand on that. My understanding is that
all the uthreads would have to run in a single C-level
interpreter invocation which can never be allowed to
return. I don't see how different interpreters can be
made to "take on" this role. If that were possible,
there wouldn't be any problem in the first place.

> It does not matter on which interpreter level we are,
> we just can't switch to frames of other levels. But
> even leaving a frame chain, and re-entering later
> with a different stack level is no problem.

You'll have to expand on that, too. Those two sentences
sound contradictory to me.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From thomas at xs4all.net  Thu Mar 15 23:54:08 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 15 Mar 2001 23:54:08 +0100
Subject: [Python-Dev] Re: Preparing 2.0.1
In-Reply-To: <200103152221.RAA16060@panix3.panix.com>; from aahz@pobox.com on Thu, Mar 15, 2001 at 02:21:12PM -0800
References: <no.id> <200103152221.RAA16060@panix3.panix.com>
Message-ID: <20010315235408.D29286@xs4all.nl>

On Thu, Mar 15, 2001 at 02:21:12PM -0800, Aahz Maruch wrote:

> I'm a little concerned that the 2.0 branch is being updated without a
> 2.0.1 target created, but it's quite possible my understanding of how
> this should work is faulty.

Probably (no offense intended) :) A maintenance branch was created together
with the release tag. A branch is a tag with an even number of dots. You can
either use cvs commit magic to commit a version to the branch, or you can
checkout a new tree or update a current tree with the branch-tag given in a
'-r' option. The tag then becomes sticky: if you run update again, it will
update against the branch files. If you commit, it will commit to the branch
files.

I keep the Mailman 2.0.x and 2.1 (head) branches in two different
directories, the 2.0-branch one checked out with:

cvs -d twouters at cvs.mailman.sourceforge.net:/cvsroot/mailman co -r \
Release_2_0_1-branch mailman; mv mailman mailman-2.0.x

It makes for very easy administration between releases. The one time I tried to
automatically import patches between two branches, I fucked up Mailman 2.0.2
and Barry had to release 2.0.3 less than a week later ;)

When you have a maintenance branch and you want to make a release in it, you
simply update your tree to the current state of that branch, and tag all the
files with tag (in Mailman) Release_2_0_3. You can then check out
specifically those files (and not changes that arrived later) and make a
tarball/windows install out of them.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From aahz at pobox.com  Fri Mar 16 00:17:29 2001
From: aahz at pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 15:17:29 -0800 (PST)
Subject: [Python-Dev] Re: Preparing 2.0.1
In-Reply-To: <20010315235408.D29286@xs4all.nl> from "Thomas Wouters" at Mar 15, 2001 11:54:08 PM
Message-ID: <200103152317.SAA04392@panix2.panix.com>

Thanks.  Martin already cleared it up for me in private e-mail.  This
kind of knowledge lack is why I shouldn't be the Patch Czar, at least
not initially.  ;-)
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From greg at cosc.canterbury.ac.nz  Fri Mar 16 00:29:52 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 12:29:52 +1300 (NZDT)
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com>
Message-ID: <200103152329.MAA05500@s454.cosc.canterbury.ac.nz>

Tim Peters <tim.one at home.com>:
> [Guido]
>> Using decimal floating point won't fly either,
> If you again mean "by default", also agreed.

But if it's *not* by default, it won't stop naive users
from getting tripped up.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From aahz at pobox.com  Fri Mar 16 00:44:05 2001
From: aahz at pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 15:44:05 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010315234421.C29286@xs4all.nl> from "Thomas Wouters" at Mar 15, 2001 11:44:21 PM
Message-ID: <200103152344.SAA06969@panix2.panix.com>

Thomas Wouters:
>
> [ How to get case-insensitive import fixed in 2.0.x ]
> 
> Aahz:
>>
>> * Permit minor feature additions/changes.
>> 
>> If we choose that last option, I believe a distinction should be drawn
>> between releases that contain only bugfixes and releases that contain a
>> bit more.
> 
> We could make the distinction in the release notes. It could be a
> 'PURE BUGFIX RELEASE' or a 'FEATURE FIX RELEASE'. Bugfix releases just fix
> bugs, that is, wrong behaviour. feature fix releases fix misfeatures, like
> the case insensitive import issues. The difference between the two should be
> explained in the paragraph following the header, for *each* release. For
> example,

I shan't whine if BDFL vetoes it, but I think this info ought to be
encoded in the version number.  Other than that, it seems that we're
mostly quibbling over wording, and it doesn't matter much to me how we
do it; your suggestion is fine with me.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From greg at cosc.canterbury.ac.nz  Fri Mar 16 00:46:07 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 12:46:07 +1300 (NZDT)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103152234.RAA16951@panix3.panix.com>
Message-ID: <200103152346.MAA05504@s454.cosc.canterbury.ac.nz>

aahz at pobox.com (Aahz Maruch):

> My preference would be that 2.1.2p is a branch from 2.1.1.

That could be a rather confusing numbering system.

Also, once there has been a patch release, does that mean that
the previous sequence of bugfix-only releases is then closed off?

Even a minor feature addition has the potential to introduce
new bugs. Some people may not want to take even that small
risk, but still want to keep up with bug fixes, so there may
be a demand for a further bugfix release to 2.1.1 after
2.1.2p is released. How would such a release be numbered?

Seems to me that if you're going to have minor feature releases
at all, you need a four-level numbering system: W.X.Y.Z,
where Y is the minor feature release number and Z the bugfix
release number.
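
[Editorial sketch: the ordering such a four-level scheme gives is easy to
see with tuple comparison; the version strings below are hypothetical.]

```python
# W.X.Y.Z: W.X = feature release, Y = minor-feature number, Z = bugfix number.
def parse_version(s):
    return tuple(int(part) for part in s.split("."))

# Bugfixes to the 2.1.0 line can continue (2.1.0.2) even after the
# minor-feature release 2.1.1.0 appears; all releases sort unambiguously:
releases = ["2.1.0.1", "2.1.1.0", "2.1.0.2", "2.1.1.1"]
assert sorted(releases, key=parse_version) == [
    "2.1.0.1", "2.1.0.2", "2.1.1.0", "2.1.1.1"]
```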

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Fri Mar 16 00:48:31 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 12:48:31 +1300 (NZDT)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010315234421.C29286@xs4all.nl>
Message-ID: <200103152348.MAA05507@s454.cosc.canterbury.ac.nz>

Thomas Wouters <thomas at xs4all.net>:

> This means that the (unexpected) behaviour of one or more features was
> changed. This is a low-impact change that is unlikely to affect
> anyone

Ummm... if it's so unlikely to affect anything, is it really
worth making a special release for it?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Fri Mar 16 02:34:52 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 15 Mar 2001 20:34:52 -0500
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <200103152329.MAA05500@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEPJJFAA.tim_one@email.msn.com>

[Guido]
> Using decimal floating point won't fly either,

[Tim]
> If you again mean "by default", also agreed.

[Greg Ewing]
> But if it's *not* by default, it won't stop naive users
> from getting tripped up.

Naive users are tripped up by many things.  I want to stop them in *Python*
from stumbling over 1/3, not over 1./3 or 0.5.  Changing the meaning of the
latter won't fly, not at this stage in the language's life; if the language
were starting from scratch, sure, but it's not.

I have no idea why Guido is so determined that the *former* (1/3) yield
binary floating point too (as opposed to something saner, be it rationals or
decimal fp), but I'm still trying to provoke him into explaining that part
<0.5 wink>.

I believe users (both newbies and experts) would also benefit from an
explicit way to spell a saner alternative using a tagged fp notation.
Whatever that alternative may be, I want 1/3 (not 1./3. or 0.5 or 1e100) to
yield that too without futzing with tags.




From tim_one at email.msn.com  Fri Mar 16 03:25:41 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 15 Mar 2001 21:25:41 -0500
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <200103151542.KAA09191@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEPKJFAA.tim_one@email.msn.com>

[Tim]
> I think you'd have a very hard time finding any pre-college
> level teacher who wants to teach binary fp.  Your ABC experience is
> consistent with that too.

[Guido]
> "Want to", no.  But whether they're teaching Java, C++, or Pascal,
> they have no choice: if they need 0.5, they'll need binary floating
> point, whether they explain it adequately or not.  Possibly they are
> all staying away from the decimal point completely, but I find that
> hard to believe.

Pascal is the only language there with any claim to newbie friendliness
(Stroustrup's essays notwithstanding).  Along with C, it grew up in the era
of mondo expensive mainframes with expensive binary floating-point hardware
(the CDC boxes Wirth used were designed by S. Cray, and like all such were
fast-fp-at-any-cost designs).

As the earlier Kahan quote said, the massive difference between then and now
is the "innocence" of a vastly larger computer audience.  A smaller
difference is that Pascal is effectively dead now.  C++ remains constrained
by compatibility with C, although any number of decimal class libraries are
available for it, and run as fast as C++ can make them run.  The BigDecimal
class has been standard in Java since 1.1, but, since it's Java, it's so
wordy to use that it's as tedious as everything else in Java for more than
occasional use.

OTOH, from Logo to DrScheme, with ABC and REXX in between, *some* builtin
alternative to binary fp is a feature of all languages I know of that aim not
to drive newbies insane.  "Well, its non-integer arithmetic is no worse than
C++'s" is no selling point for Python.

>>>  But other educators (e.g. Randy Pausch, and the folks who did
>>> VPython) strongly recommend this based on user observation, so
>>> there's hope.

>> Alice is a red herring!  What they wanted was for 1/2 *not* to
>> mean 0.  I've read the papers and dissertations too -- there was
>> no plea for binary fp in those, just that division not throw away
>> info.

> I never said otherwise.

OK, but then I don't know what it is you were saying.  Your sentence
preceding "... strongly recommend this ..." ended:

    this would mean an approximate, binary f.p. result for 1/3, and
    this does not seem to have the support of the educators ...

and I assumed the "this" in "Randy Pausch, and ... VPython strongly recommend
this" also referred to "an approximate, binary f.p. result for 1/3".  Which
they did not strongly recommend.  So I'm lost as to what you're saying they
did strongly recommend.

Other people in this thread have said that 1./3. should give an exact
rational or a decimal fp result, but I have not.  I have said 1/3 should not
be 0, but there are at least 3 schemes on the table which deliver a non-zero
result for 1/3, only one of which is to deliver a binary fp result.
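With today's standard library (none of these types existed at the time), the
three candidate results can be sketched, with Fraction and Decimal standing in
for the proposed rational and decimal-fp types:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# The three non-zero candidates for 1/3 discussed in this thread,
# sketched with stdlib types that postdate the discussion:
binary_fp = 1 / 3                       # binary floating point
rational = Fraction(1, 3)               # exact rational
getcontext().prec = 28
decimal_fp = Decimal(1) / Decimal(3)    # decimal floating point

assert rational * 3 == 1                # only the rational result is exact
assert decimal_fp * 3 != 1              # decimal fp rounds 1/3, so it is inexact
```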

> It just boils down to binary fp as the only realistic choice.

For 1./3. and 0.67 I agree (for backward compatibility), but I've seen no
identifiable argument in favor of binary fp for 1/3.  Would Alice's users be
upset if that returned a rational or decimal fp value instead?  I'm tempted
to say "of course not", but I really haven't asked them <wink>.




From tim.one at home.com  Fri Mar 16 04:16:12 2001
From: tim.one at home.com (Tim Peters)
Date: Thu, 15 Mar 2001 22:16:12 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: <3AB0EE66.37E6C633@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEAAJGAA.tim.one@home.com>

[M.-A. Lemburg]
> Just out of curiosity: is there a usable decimal type implementation
> somewhere on the net which we could beat on ?

ftp://ftp.python.org/pub/python/
    contrib-09-Dec-1999/DataStructures/FixedPoint.py

It's more than two years old, and regularly mentioned on c.l.py.  From the
tail end of the module docstring:

"""
The following Python operators and functions accept FixedPoints in the
expected ways:

    binary + - * / % divmod
        with auto-coercion of other types to FixedPoint.
        + - % divmod  of FixedPoints are always exact.
        * / of FixedPoints may lose information to rounding, in
            which case the result is the infinitely precise answer
            rounded to the result's precision.
        divmod(x, y) returns (q, r) where q is a long equal to
            floor(x/y) as if x/y were computed to infinite precision,
            and r is a FixedPoint equal to x - q * y; no information
            is lost.  Note that q has the sign of y, and abs(r) < abs(y).
    unary -
    == != < > <= >=  cmp
    min  max
    float  int  long    (int and long truncate)
    abs
    str  repr
    hash
    use as dict keys
    use as boolean (e.g. "if some_FixedPoint:" -- true iff not zero)
"""

> I for one would be very interested in having a decimal type
> around (with fixed precision and scale),

FixedPoint is unbounded "to the left" of the point but maintains a fixed and
user-settable number of (decimal) digits "after the point".  You can easily
subclass it to complain about overflow, or whatever other damn-fool thing you
think is needed <wink>.

> since databases rely on these a lot and I would like to assure
> that passing database data through Python doesn't cause any data
> loss due to rounding issues.

Define your ideal API and maybe I can implement it someday.  My employer also
has use for this.  FixedPoint.py is better suited to computation than I/O,
though, since it uses Python longs internally, and conversion between
BCD-like formats and Python longs is expensive.

> If there aren't any such implementations yet, the site that Tim
> mentioned  looks like a good starting point for heading into this
> direction... e.g. for mx.Decimal ;-)
>
> 	http://www2.hursley.ibm.com/decimal/

FYI, note that Cowlishaw is moving away from REXX's "string of ASCII digits"
representation toward a variant of BCD encoding.





From barry at digicool.com  Fri Mar 16 04:31:08 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 15 Mar 2001 22:31:08 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
References: <200103150614.BAA04221@panix6.panix.com>
	<20010315233737.B29286@xs4all.nl>
Message-ID: <15025.35068.826947.482650@anthem.wooz.org>

Three things to keep in mind, IMO.  First, people dislike too many
choices.  As the version numbering scheme grows and branches multiply, the
confusion level rises (it's probably the case that for each dot or letter you
add to the version number, the number of people who understand which
one to grab drops by an order of magnitude :).  I don't think it
makes any sense to do more than one branch from the main trunk, and
then do bug fix releases along that branch whenever and for as long as
it seems necessary.

Second, you probably do not need a beta cycle for patch releases.
Just do the 2.0.2 release and if you've royally hosed something (which
is unlikely but possible) turn around and do the 2.0.3 release <wink>
a.s.a.p.

Third, you might want to create a web page, maybe a wiki is perfect
for this, that contains the most important patches.  It needn't
contain everything that goes into a patch release, but it can if
that's not too much trouble.  A nice explanation for each fix would
allow a user who doesn't want to apply the whole patch or upgrade to
just apply the most critical bug fixes for their application.  This
can get more complicated as the dependencies b/w patches go up, so
it may not be feasible for all patches, or for the entire lifetime of
the maintenance branch.

-Barry



From barry at digicool.com  Fri Mar 16 04:40:51 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 15 Mar 2001 22:40:51 -0500
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>
	<0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>
	<200103152132.f2FLWlE29312@mira.informatik.hu-berlin.de>
Message-ID: <15025.35651.57084.276629@anthem.wooz.org>

>>>>> "MvL" == Martin v Loewis <martin at loewis.home.cs.tu-berlin.de> writes:

    MvL> In any case, I'm in favour of a much more careful operation
    MvL> for a bugfix release. That probably means not all bugs that
    MvL> have been fixed already will be fixed in 2.0.1; I would not
    MvL> expect otherwise.

I agree.  I think each patch will require careful consideration by the
patch czar, and some will be difficult calls.  You're just not going
to "fix" everything in 2.0.1 that's fixed in 2.1.  Give it your best
shot and keep the overhead for making a new patch release low.  That
way, if you screw up or get a hue and cry for not including a patch
everyone else considers critical, you can make a new patch release
fairly soon thereafter.

-Barry



From barry at digicool.com  Fri Mar 16 04:57:40 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 15 Mar 2001 22:57:40 -0500
Subject: [Python-Dev] Re: Preparing 2.0.1
References: <no.id>
	<200103152221.RAA16060@panix3.panix.com>
	<20010315235408.D29286@xs4all.nl>
Message-ID: <15025.36660.87154.993275@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

Thanks for the explanation Thomas, that's exactly how I manage the
Mailman trees too.  A couple of notes.

    TW> I keep the Mailman 2.0.x and 2.1 (head) branches in two
    TW> different directories, the 2.0-branch one checked out with:

    TW> cvs -d twouters at cvs.mailman.sourceforge.net:/cvsroot/mailman
    TW> co -r \ Release_2_0_1-branch mailman; mv mailman mailman-2.0.x
----------------^^^^^^^^^^^^^^^^^^^^

If I had to do it over again, I would have called this the
Release_2_0-maint branch.  I think that makes more sense when you see
the Release_2_0_X tags along that branch.

This was really my first foray back into CVS branches after my last
disaster (the string-meths branch on Python).  Things are working much
better this time, so I guess I understand how to use them now...

...except that I hit a small problem with CVS.  When I was ready to
release a new patch release along the maintenance branch, I wasn't
able to coax CVS into giving me a log between two tags on the branch.
E.g. I tried:

    cvs log -rRelease_2_0_1 -rRelease_2_0_2

(I don't actually remember at the moment whether it's specified like
this or with a colon between the release tags, but that's immaterial).

The resulting log messages did not include any of the changes between
those two tags.  However a "cvs diff" between the two tags /did/
give me the proper output, as did a "cvs log" between the branch tag
and the end of the branch.

Could have been a temporary glitch in CVS or maybe I was dipping into
the happy airplane pills a little early.  I haven't tried it again
since.

took-me-about-three-hours-to-explain-this-to-jeremy-on-the-way-to-ipc9
    -but-the-happy-airplane-pills-were-definitely-partying-in-my
    -bloodstream-at-the-time-ly y'rs,

-Barry



From tim.one at home.com  Fri Mar 16 07:34:33 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 16 Mar 2001 01:34:33 -0500
Subject: [Python-Dev] Patch Manager Guidelines
In-Reply-To: <200103151644.LAA09360@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEAGJGAA.tim.one@home.com>

[Martin]
> I have to following specific questions: If a patch is accepted, should
> it be closed also? If so, how should the resolution change if it is
> also committed?

[Guido]
> A patch should only be closed after it has been committed; otherwise
> it's too easy to lose track of it.  So I guess the proper sequence is
>
> 1. accept; Resolution set to Accepted
>
> 2. commit; Status set to Closed
>
> I hope the owner of the sf-faq document can fix it.

Heh -- there is no such person.  Since I wrote that Appendix to begin with, I
checked in appropriate changes:  yes, status should be Open if and only if
something still needs to be done (even if that's only a commit); status
should be Closed or Deleted if and only if nothing more should ever be done.




From tim.one at home.com  Fri Mar 16 08:02:08 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 16 Mar 2001 02:02:08 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103151539.QAA01573@core.inf.ethz.ch>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com>

[Samuele Pedroni]
> ...
> I was thinking about stuff like generators used everywhere,
> but that is maybe just uninformed panicking. They are the
> kind of stuff that makes programmers addicted <wink>.

Jython is to CPython as Jcon is to Icon, and *every* expression in Icon is "a
generator".

    http://www.cs.arizona.edu/icon/jcon/

is the home page, and you can get a paper from there detailing the Jcon
implementation.  It wasn't hard, and it's harder in Jcon than it would be in
Jython because Icon generators are also tied into an ubiquitous backtracking
scheme ("goal-directed evaluation").

Does Jython have an explicit object akin to CPython's execution frame?  If
so, 96.3% of what's needed for generators is already there.

At the other end of the scale, Jcon implements Icon's co-expressions (akin to
coroutines) via Java threads.




From tismer at tismer.com  Fri Mar 16 11:37:30 2001
From: tismer at tismer.com (Christian Tismer)
Date: Fri, 16 Mar 2001 11:37:30 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103152245.LAA05494@s454.cosc.canterbury.ac.nz>
Message-ID: <3AB1ECEA.CD0FFC51@tismer.com>

This is going to be a hard task.
Well, let me give it a try...

Greg Ewing wrote:
> 
> > But most probably, it will run interpreters from time to time.
> > These can be told to take the scheduling role on.
> 
> You'll have to expand on that. My understanding is that
> all the uthreads would have to run in a single C-level
> interpreter invocation which can never be allowed to
> return. I don't see how different interpreters can be
> made to "take on" this role. If that were possible,
> there wouldn't be any problem in the first place.
> 
> > It does not matter on which interpreter level we are,
> > we just can't switch to frames of other levels. But
> > even leaving a frame chain, and re-entering later
> > with a different stack level is no problem.
> 
> You'll have to expand on that, too. Those two sentences
> sound contradictory to me.

Hmm. I can't see the contradiction yet. Let me try to explain,
maybe everything becomes obvious.

A microthread is a chain of frames.
All microthreads are sitting "below" a scheduler,
which ties them all together to a common root.
So this is a little like a tree.

There is a single interpreter who does the scheduling
and the processing.
At any time, there is
- either one thread running, or
- the scheduler itself.

As long as this interpreter is running, scheduling takes place.
But against your assumption, this interpreter can of course
return. He leaves the uthread tree structure intact and jumps
out of the scheduler, back to the calling C function.
This is doable.

But then, all the frames of the uthread tree are in a defined
state, none is currently being executed, so none is locked.
We can now use any other interpreter instance that is
created and use it to restart the scheduling process.

Maybe this clarifies it:
We cannot mix different interpreter levels *at the same time*.
It is not possible to schedule from a nested interpreter,
since that one needs to be unwound first.
But stopping the interpreter is a perfect unwind, and we
can start again from anywhere.
Therefore, a call-back driven UI should be no problem.
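A rough sketch of this structure with the generators that arrived later: each
task's frame survives between time slices, and the scheduler is an ordinary
function that can return to its caller with the whole tree intact.  All names
here are illustrative only:

```python
from collections import deque

def run(tasks):
    # Round-robin scheduler: each task is a generator whose frame
    # stays intact ("below" the scheduler) between time slices.
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))
        except StopIteration:
            continue            # this microthread is finished
        ready.append(task)      # reschedule; the frame is preserved
    return trace                # the scheduler itself returns cleanly

def worker(name, steps):
    for i in range(steps):
        yield (name, i)         # hand control back to the scheduler

print(run([worker("a", 2), worker("b", 1)]))
# prints [('a', 0), ('b', 0), ('a', 1)]
```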

Thanks for the good question; I had never completely
thought it through before.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From nas at arctrix.com  Fri Mar 16 12:37:33 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 03:37:33 -0800
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 16, 2001 at 02:02:08AM -0500
References: <200103151539.QAA01573@core.inf.ethz.ch> <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com>
Message-ID: <20010316033733.A9366@glacier.fnational.com>

On Fri, Mar 16, 2001 at 02:02:08AM -0500, Tim Peters wrote:
> Does Jython have an explicit object akin to CPython's execution frame?  If
> so, 96.3% of what's needed for generators is already there.

FWIW, I think I almost have generators working after making
fairly minor changes to frameobject.c and ceval.c.  The only
remaining problem is that ceval likes to nuke f_valuestack.  The
hairy WHY_* logic is making this hard to fix.  Based on all the
conditionals it looks like it would be similer to put this code
in the switch statement.  That would probably speed up the
interpreter to boot.  Am I missing something or should I give it
a try?

  Neil



From nas at arctrix.com  Fri Mar 16 12:43:46 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 03:43:46 -0800
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <20010316033733.A9366@glacier.fnational.com>; from nas@arctrix.com on Fri, Mar 16, 2001 at 03:37:33AM -0800
References: <200103151539.QAA01573@core.inf.ethz.ch> <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com> <20010316033733.A9366@glacier.fnational.com>
Message-ID: <20010316034346.B9366@glacier.fnational.com>

On Fri, Mar 16, 2001 at 03:37:33AM -0800, Neil Schemenauer wrote:
> Based on all the conditionals it looks like it would be similer
> to put this code in the switch statement.

s/similer/simpler.  It's early and I have the flu, okay? :-)

  Neil



From moshez at zadka.site.co.il  Fri Mar 16 14:18:43 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Fri, 16 Mar 2001 15:18:43 +0200
Subject: [Python-Dev] [Very Long (11K)] Numeric PEPs, first public posts
Message-ID: <E14du7v-0004Xn-00@darjeeling>

After the brouhaha at IPC9, it was decided that while PEP-0228 should stay
as a possible goal, there should be more concrete PEPs suggesting specific
changes in Python numerical model, with implementation suggestions and
migration paths fleshed out. So, there are four new PEPs now, all proposing
changes to Python's numeric model. There are some connections between them,
but each is supposed to be accepted or rejected according to its own merits.

To facilitate discussion, I'm including copies of the PEPs concerned
(for reference purposes, these are PEPs 0237-0240, and the latest public
version is always in the Python CVS under non-dist/peps/ . A reasonably
up to date version is linked from http://python.sourceforge.net)

Please direct all future discussion to python-numerics at lists.sourceforge.net
This list has been set up especially to discuss these subjects.

PEP: 237
Title: Unifying Long Integers and Integers
Version: $Revision: 1.2 $
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    Python has both integer (machine word size integral) types and
    long integer (unbounded integral) types.  When integer
    operations overflow the machine registers, they raise an error.
    This PEP proposes to do away with the distinction, and unify the
    types from the perspective of both the Python interpreter and the
    C API.


Rationale

    Having the machine word size exposed to the language hinders
    portability.  For example, Python source files and .pyc's are not
    portable because of this.  Many programs find a need to deal with
    larger numbers after the fact, and changing the algorithms later
    is not only bothersome, but hinders performance in the normal
    case.


Literals

    A trailing 'L' at the end of an integer literal will stop having
    any meaning, and will be eventually phased out.  This will be done
    using warnings when encountering such literals.  The warning will
    be off by default in Python 2.2, on for 12 months, which will
    probably mean Python 2.3 and 2.4, and then will no longer be
    supported.


Builtin Functions

    The function long() will call the function int(), issuing a
    warning.  The warning will be off in 2.2, and on for two revisions
    before removing the function.  A FAQ entry will be added to explain
    that solutions for old modules are:

         long=int

    at the top of the module, or:

         import __builtin__
         __builtin__.long=int

    in site.py.


C API

    All PyLong_As* will call PyInt_As*.  If PyInt_As* does not exist,
    it will be added.  Similarly for PyLong_From*.  A similar path of
    warnings as for the Python builtins will be followed.


Overflows

    When an arithmetic operation on two numbers whose internal
    representation is as machine-level integers returns something
    whose internal representation is a bignum, a warning which is
    turned off by default will be issued.  This is only a debugging
    aid, and has no guaranteed semantics.


Implementation

    The PyInt type's slot for a C long will be turned into a 

        union {
            long i;
            struct {
                unsigned long length;
                digit digits[1];
            } bignum;
        };

    Only the n-1 lower bits of the long have any meaning; the top bit
    is always set.  This distinguishes the union.  All PyInt functions
    will check this bit before deciding which types of operations to
    use.
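    The tagged-union discrimination described here can be mimicked in
    pure Python for illustration; WORD_BITS, the field layout, and the
    helper names below are assumptions of this sketch, not part of the
    proposal:

```python
# Illustrative only: mimic the PEP's tag-bit discrimination, with the
# top bit of an assumed 32-bit word set for "small" inline integers.
WORD_BITS = 32                      # assumed machine word size
TAG = 1 << (WORD_BITS - 1)

def make_small(n):
    # Store n in the lower WORD_BITS - 1 bits, with the tag bit set.
    assert -(1 << (WORD_BITS - 2)) <= n < (1 << (WORD_BITS - 2))
    return TAG | (n & (TAG - 1))

def is_small(word):
    # The top bit distinguishes the two arms of the union.
    return bool(word & TAG)

def small_value(word):
    n = word & (TAG - 1)
    # Sign-extend the (WORD_BITS - 1)-bit field.
    if n & (1 << (WORD_BITS - 2)):
        n -= 1 << (WORD_BITS - 1)
    return n

assert is_small(make_small(7))
assert small_value(make_small(7)) == 7
assert small_value(make_small(-5)) == -5
```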


Jython Issues

    Jython will have a PyInt interface which is implemented by both
    PyFixNum and PyBigNum.


Open Issues

    What to do about sys.maxint?

    What to do about PyInt_AS_LONG failures?

    What to do about %u, %o, %x formatting operators?

    How to warn about << not cutting integers?

    Should the overflow warning be on a portable maximum size?

    Will unification of types and classes help with a more straightforward
    implementation?


Copyright

    This document has been placed in the public domain.


PEP: 238
Title: Non-integer Division
Version: $Revision: 1.1 $
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    Dividing integers currently returns the floor of the quantities.
    This behavior is known as integer division, and is similar to what
    C and FORTRAN do.  This has the useful property that all
    operations on integers return integers, but it does tend to put a
    hump in the learning curve when new programmers are surprised that

        1/2 == 0

    This proposal shows a way to change this while keeping backward
    compatibility issues in mind.


Rationale

    The behavior of integer division is a major stumbling block found
    in user testing of Python.  This manages to trip up new
    programmers regularly and even causes the experienced programmer
    to make the occasional mistake.  The workarounds, like explicitly
    coercing one of the operands to float or using a non-integer
    literal, are very non-intuitive and lower the readability of the
    program.


// Operator

    A `//' operator will be introduced, which will call the
    nb_intdivide or __intdiv__ slots.  This operator will be
    implemented in all the Python numeric types, and will have the
    semantics of

        a // b == floor(a/b)

    except that the type of a//b will be the type that a and b are
    coerced into.  Specifically, if a and b are of the same type, a//b
    will be of that type too.
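    Python did eventually grow this operator (though with different slot
    names, __floordiv__ rather than __intdiv__), so the proposed
    semantics can be checked in any modern interpreter:

```python
import math

# a // b == floor(a / b), and the result has the coerced type of a and b.
assert 7 // 2 == 3
assert -7 // 2 == -4                # floor, not truncation toward zero
assert math.floor(-7 / 2) == -7 // 2
assert 7.0 // 2 == 3.0              # float operand => float result
assert isinstance(7 // 2, int)
assert isinstance(7.0 // 2, float)
```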


Changing the Semantics of the / Operator

    The nb_divide slot on integers (and long integers, if these are a
    separate type, but see PEP 237[1]) will issue a warning when given
    integers a and b such that

        a % b != 0

    The warning will be off by default in the 2.2 release, and on by
    default in the next Python release, and will stay in effect
    for 24 months.  The first Python release after those 24 months will
    implement

        (a/b) * b = a (more or less)

    The type of a/b will be either a float or a rational, depending on
    other PEPs[2, 3].


__future__

    A special opcode, FUTURE_DIV will be added that does the
    equivalent of:

        if type(a) in (types.IntType, types.LongType):
            if type(b) in (types.IntType, types.LongType):
                if a % b != 0:
                    return float(a)/b
        return a/b

    (or rational(a)/b, depending on whether 0.5 is rational or float).

    If "from __future__ import non_integer_division" is present in the
    module, until the IntType nb_divide is changed, the "/" operator
    is compiled to FUTURE_DIV.
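    For reference, this migration later happened under the spelling
    "from __future__ import division", and Python 3 made it the default,
    going slightly further than the sketch above: "/" on ints always
    returns a float, even when the division is exact:

```python
# Final semantics of the migration, observable in any Python 3:
assert 1 / 2 == 0.5
assert type(1 / 2) is float     # floats won out over rationals
assert 4 / 2 == 2.0             # true division even when a % b == 0
assert (1 / 3) * 3 == 1.0       # "(a/b) * b = a (more or less)"
```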


Open Issues

    Should the // operator be renamed to "div"?


References

    [1] PEP 237, Unifying Long Integers and Integers, Zadka,
        http://python.sourceforge.net/peps/pep-0237.html

    [2] PEP 239, Adding a Rational Type to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0239.html

    [3] PEP 240, Adding a Rational Literal to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0240.html


Copyright

    This document has been placed in the public domain.


PEP: 239
Title: Adding a Rational Type to Python
Version: $Revision: 1.1 $
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    Python has no numeric type with the semantics of an unboundedly
    precise rational number.  This proposal explains the semantics of
    such a type, and suggests builtin functions and literals to
    support such a type.  This PEP suggests no literals for rational
    numbers; that is left for another PEP[1].


Rationale

    While sometimes slower and more memory intensive (in general,
    unboundedly so), rational arithmetic captures more closely the
    mathematical ideal of numbers, and tends to have behavior which is
    less surprising to newbies.  Though many Python implementations of
    rational numbers have been written, none of these exist in the
    core, or are documented in any way.  This has made them much less
    accessible to people who are less Python-savvy.


RationalType

    There will be a new numeric type added called RationalType.  Its
    unary operators will do the obvious thing.  Binary operators will
    coerce integers and long integers to rationals, and rationals to
    floats and complexes.

    The following attributes will be supported: .numerator and
    .denominator.  The language definition will not define these other
    than that:

        r.denominator * r == r.numerator

    In particular, no guarantees are made regarding the GCD or the
    sign of the denominator, even though in the proposed
    implementation, the GCD is always 1 and the denominator is always
    positive.

    The method r.trim(max_denominator) will return the closest
    rational s to r such that abs(s.denominator) <= max_denominator.


The rational() Builtin

    This function will have the signature rational(n, d=1).  n and d
    must both be integers, long integers or rationals.  A guarantee is
    made that

        rational(n, d) * d == n
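    The fractions.Fraction type, added to the standard library much
    later, is a close match for this interface (limit_denominator
    playing the role of trim), and can be used to check the stated
    guarantees:

```python
from fractions import Fraction

# Fraction(n, d) plays the role of the proposed rational(n, d).
r = Fraction(6, 4)
assert r * 4 == 6                               # rational(n, d) * d == n
assert (r.numerator, r.denominator) == (3, 2)   # GCD 1, denominator > 0
assert r.denominator * r == r.numerator

# limit_denominator is the analogue of r.trim(max_denominator):
pi_ish = Fraction(355, 113)
assert pi_ish.limit_denominator(10) == Fraction(22, 7)
```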


References

    [1] PEP 240, Adding a Rational Literal to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0240.html


Copyright

    This document has been placed in the public domain.


PEP: 240
Title: Adding a Rational Literal to Python
Version: $Revision: 1.1 $
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    A different PEP[1] suggests adding a builtin rational type to
    Python.  This PEP suggests changing the ddd.ddd float literal to a
    rational in Python, and modifying non-integer division to return
    it.


Rationale

    Rational numbers are useful, and are much harder to use without
    literals.  Making the "obvious" non-integer type one with more
    predictable semantics will surprise new programmers less than
    using floating point numbers.


Proposal

    Literals conforming to the regular expression '\d*\.\d*' will be
    rational numbers.


Backwards Compatibility

    The only backwards compatibility issue is the type of the literals
    mentioned above.  The following migration is suggested:

    1. "from __future__ import rational_literals" will cause all such
       literals to be treated as rational numbers.

    2. Python 2.2 will have a warning, turned off by default, about
       such literals in the absence of a __future__ statement.  The
       warning message will contain information about the __future__
       statement, and indicate that to get floating point literals,
       they should be suffixed with "e0".

    3. Python 2.3 will have the warning turned on by default.  This
       warning will stay in place for 24 months, at which time the
       literals will be rationals and the warning will be removed.
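    With fractions.Fraction standing in for the proposed rational type,
    the motivating difference can be sketched as:

```python
from fractions import Fraction

# Today "0.1" is a binary float; an exact rational literal would
# behave like Fraction("0.1") == Fraction(1, 10).
assert 0.1 + 0.1 + 0.1 != 0.3                  # binary fp surprise
assert Fraction("0.1") * 3 == Fraction(3, 10)  # exact under rationals
assert type(5e0) is float                      # the suggested "e0" escape
```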


References

    [1] PEP 239, Adding a Rational Type to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0239.html


Copyright

    This document has been placed in the public domain.



From nas at arctrix.com  Fri Mar 16 14:54:48 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 05:54:48 -0800
Subject: [Python-Dev] Simple generator implementation
In-Reply-To: <20010316033733.A9366@glacier.fnational.com>; from nas@arctrix.com on Fri, Mar 16, 2001 at 03:37:33AM -0800
References: <200103151539.QAA01573@core.inf.ethz.ch> <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com> <20010316033733.A9366@glacier.fnational.com>
Message-ID: <20010316055448.A9591@glacier.fnational.com>

On Fri, Mar 16, 2001 at 03:37:33AM -0800, Neil Schemenauer wrote:
> ... it looks like it would be similer to put this code in the
> switch statement.

Um, no.  Bad idea.  Even if I could restructure the loop, try/finally
blocks mess everything up anyhow.

After searching through many megabytes of python-dev archives (grepmail
is my friend), I finally found the posts Tim was referring me to
(Subject: Generator details, Date: July 1999).  Guido and Tim already
had the answer for me.  Now:

    import sys

    def g():
        for n in range(10):
            suspend n, sys._getframe()
        return None, None

    n, frame = g()
    while frame:
        print n
        n, frame = frame.resume()

merrily prints 0 to 9 on stdout.  Whee!
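The suspend keyword and frame.resume() above were experimental; with the
generators that eventually shipped, the equivalent is simply:

```python
def g():
    # A generator: each yield suspends the frame, next() resumes it.
    for n in range(10):
        yield n

for n in g():
    print(n)        # prints 0 through 9, one per line

assert list(g()) == list(range(10))
```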

  Neil



From aahz at panix.com  Fri Mar 16 17:51:54 2001
From: aahz at panix.com (aahz at panix.com)
Date: Fri, 16 Mar 2001 08:51:54 -0800 (PST)
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <no.id> from "Martin v. Loewis" at Mar 15, 2001 10:32:47 PM
Message-ID: <200103161651.LAA18978@panix2.panix.com>

> 2. the author of the original patch can make that decision. That would
>    mean that Fredrik Lundh can still install his code as-is, but I'd
>    have to ask Fred's permission.
> 
> 3. the bug release coordinator can make that decision. That means that
>    Aahz must decide.

I'm in favor of some combination of 2) and 3).
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From martin at loewis.home.cs.tu-berlin.de  Fri Mar 16 18:46:47 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 16 Mar 2001 18:46:47 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <200103161651.LAA18978@panix2.panix.com> (aahz@panix.com)
References: <200103161651.LAA18978@panix2.panix.com>
Message-ID: <200103161746.f2GHklZ00972@mira.informatik.hu-berlin.de>

> I'm in favor of some combination of 2) and 3).

So let's try this out: Is it ok to include the new fields on range
objects in 2.0.1?

Regards,
Martin




From mal at lemburg.com  Fri Mar 16 19:09:17 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 16 Mar 2001 19:09:17 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCEEAAJGAA.tim.one@home.com>
Message-ID: <3AB256CD.AE35DDEC@lemburg.com>

Tim Peters wrote:
> 
> [M.-A. Lemburg]
> > Just out of curiosity: is there a usable decimal type implementation
> > somewhere on the net which we could beat on ?
> 
> ftp://ftp.python.org/pub/python/
>     contrib-09-Dec-1999/DataStructures/FixedPoint.py

So my intuition wasn't wrong -- you had all this already implemented
years ago ;-)
 
> It's more than two years old, and regularly mentioned on c.l.py.  From the
> tail end of the module docstring:
> 
> """
> The following Python operators and functions accept FixedPoints in the
> expected ways:
> 
>     binary + - * / % divmod
>         with auto-coercion of other types to FixedPoint.
>         + - % divmod  of FixedPoints are always exact.
>         * / of FixedPoints may lose information to rounding, in
>             which case the result is the infinitely precise answer
>             rounded to the result's precision.
>         divmod(x, y) returns (q, r) where q is a long equal to
>             floor(x/y) as if x/y were computed to infinite precision,
>             and r is a FixedPoint equal to x - q * y; no information
>             is lost.  Note that q has the sign of y, and abs(r) < abs(y).
>     unary -
>     == != < > <= >=  cmp
>     min  max
>     float  int  long    (int and long truncate)
>     abs
>     str  repr
>     hash
>     use as dict keys
>     use as boolean (e.g. "if some_FixedPoint:" -- true iff not zero)
> """

Very impressive ! The code really shows just how difficult it is
to get this done right (w/r/t some definition of that term ;).

BTW, does the implementation conform to the ANSI/IEEE standards ?

> > I for one would be very interested in having a decimal type
> > around (with fixed precision and scale),
> 
> FixedPoint is unbounded "to the left" of the point but maintains a fixed and
> user-settable number of (decimal) digits "after the point".  You can easily
> subclass it to complain about overflow, or whatever other damn-fool thing you
> think is needed <wink>.

I'll probably leave that part to the database interface ;-) Since they
check for possible overflows anyway, I think your model fits the
database world best.

Note that I will have to interface to the database using the string
representation, so I might get away with adding scale and precision
parameters to a (new) asString() method.

> > since databases rely on these a lot and I would like to assure
> > that passing database data through Python doesn't cause any data
> > loss due to rounding issues.
> 
> Define your ideal API and maybe I can implement it someday.  My employer also
> has use for this.  FixedPoint.py is better suited to computation than I/O,
> though, since it uses Python longs internally, and conversion between
> BCD-like formats and Python longs is expensive.

See above: if string representations can be computed fast,
then the internal storage format is secondary.
 
> > If there aren't any such implementations yet, the site that Tim
> > mentioned  looks like a good starting point for heading into this
> > direction... e.g. for mx.Decimal ;-)
> >
> >       http://www2.hursley.ibm.com/decimal/
> 
> FYI, note that Cowlishaw is moving away from REXX's "string of ASCII digits"
> representation toward a variant of BCD encoding.

Hmm, ideal would be an Open Source C lib which could be used as
backend for the implementation... haven't found such a beast yet
and the IBM BigDecimal Java class doesn't really look attractive as a
basis for a C++ reimplementation.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From aahz at panix.com  Fri Mar 16 19:29:29 2001
From: aahz at panix.com (aahz at panix.com)
Date: Fri, 16 Mar 2001 10:29:29 -0800 (PST)
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <no.id> from "Martin v. Loewis" at Mar 16, 2001 06:46:47 PM
Message-ID: <200103161829.NAA23971@panix6.panix.com>

> So let's try this out: Is it ok to include the new fields on range
> objects in 2.0.1?

My basic answer is "no".  This is complicated by the fact that the 2.22
patch on rangeobject.c *also* fixes the __contains__ bug [*].
Nevertheless, if I were the Patch Czar (and note the very, very
deliberate use of the subjunctive here), I'd probably tell whoever
wanted to fix the __contains__ bug to submit a new patch that does not
include the new xrange() attributes.


[*]  Whee!  I figured out how to browse CVS!  ;-)
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From mal at lemburg.com  Fri Mar 16 21:29:59 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 16 Mar 2001 21:29:59 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCEEAAJGAA.tim.one@home.com> <3AB256CD.AE35DDEC@lemburg.com>
Message-ID: <3AB277C7.28FE9B9B@lemburg.com>

Looking around some more on the web, I found that the GNU MP (GMP)
lib has switched from being GPLed to LGPLed, meaning that it
can actually be used by non-GPLed code as long as the source code
for the GMP remains publicly accessible.

Some background which probably motivated this move can be found 
here:

  http://www.ptf.com/ptf/products/UNIX/current/0264.0.html
  http://www-inst.eecs.berkeley.edu/~scheme/source/stk/Mp/fgmp-1.0b5/notes

Since the GMP offers arbitrary precision numbers and also has
a rational number implementation I wonder if we could use it
in Python to support fractions and arbitrary precision
floating points ?!

Here's a pointer to what the GNU MP has to offer:

  http://www.math.columbia.edu/online/gmp.html

The existing mpz module only supports MP integers, but support
for the other two types should only be a matter of hard work
;-).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From gward at python.net  Fri Mar 16 23:34:23 2001
From: gward at python.net (Greg Ward)
Date: Fri, 16 Mar 2001 17:34:23 -0500
Subject: [Python-Dev] Media spotting
Message-ID: <20010316173423.A20849@cthulhu.gerg.ca>

No doubt the Vancouver crowd has already seen this by now, but the rest
of you probably haven't.  From *The Globe and Mail*, March 15 2001, page
T5:

"""
Targeting people who work with computers but aren't programmers -- such
as data analysts, software testers, and Web masters -- ActivePerl comes
with telephone support and developer tools such as an "editor."  This
feature highlights mistakes made in a user's work -- similar to the
squiggly line that appears under spelling mistakes in Word documents.
"""

A-ha! so *that's* what editors are for!

        Greg

PS. article online at

  http://news.globetechnology.com/servlet/GAMArticleHTMLTemplate?tf=globetechnology/TGAM/NewsFullStory.html&cf=globetechnology/tech-config-neutral&slug=TWCOME&date=20010315

Apart from the above paragraph, it's pretty low on howlers.

-- 
Greg Ward - programmer-at-big                           gward at python.net
http://starship.python.net/~gward/
If you and a friend are being chased by a lion, it is not necessary to
outrun the lion.  It is only necessary to outrun your friend.



From sanner at scripps.edu  Sat Mar 17 02:43:23 2001
From: sanner at scripps.edu (Michel Sanner)
Date: Fri, 16 Mar 2001 17:43:23 -0800
Subject: [Python-Dev] import question
Message-ID: <1010316174323.ZM10134@noah.scripps.edu>

Hi, I didn't get any response on help-python.org, so I figured I'd try these lists


if I have the following package hierarchy

A/
	__init__.py
        B/
		__init__.py
		C.py


I can use:

>>> from A.B import C

but if I use:

>>> import A
>>> print A
<module 'A' from 'A/__init__.pyc'>
>>> from A import B
>>> print B
<module 'A.B' from 'A/B/__init__.py'>
>>> from B import C
Traceback (innermost last):
  File "<stdin>", line 1, in ?
ImportError: No module named B

in order to get this to work I have to

>>> import sys
>>> sys.modules['B'] = B

Is that expected ?
In the documentation I read:

"from" module "import" identifier

so I expected "from B import C" to be legal since B is a module

I tried this with Python 1.5.2 and 2.0 on an sgi under IRIX6.5

Thanks for any help

-Michel

-- 

-----------------------------------------------------------------------

>>>>>>>>>> AREA CODE CHANGE <<<<<<<<< we are now 858 !!!!!!!

Michel F. Sanner Ph.D.                   The Scripps Research Institute
Assistant Professor			Department of Molecular Biology
					  10550 North Torrey Pines Road
Tel. (858) 784-2341				     La Jolla, CA 92037
Fax. (858) 784-2860
sanner at scripps.edu                        http://www.scripps.edu/sanner
-----------------------------------------------------------------------




From guido at digicool.com  Sat Mar 17 03:13:14 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 16 Mar 2001 21:13:14 -0500
Subject: [Python-Dev] Re: [Import-sig] import question
In-Reply-To: Your message of "Fri, 16 Mar 2001 17:43:23 PST."
             <1010316174323.ZM10134@noah.scripps.edu> 
References: <1010316174323.ZM10134@noah.scripps.edu> 
Message-ID: <200103170213.VAA13856@cj20424-a.reston1.va.home.com>

> if I have the following package hierarchy
> 
> A/
> 	__init__.py
>         B/
> 		__init__.py
> 		C.py
> 
> 
> I can use:
> 
> >>> from A.B import C
> 
> but if I use:
> 
> >>> import A
> >>> print A
> <module 'A' from 'A/__init__.pyc'>
> >>> from A import B
> >>> print B
> <module 'A.B' from 'A/B/__init__.py'>
> >>> from B import C
> Traceback (innermost last):
>   File "<stdin>", line 1, in ?
> ImportError: No module named B
> 
> in order to get this to work I have to
> 
> >>> import sys
> >>> sys.modules['B'] = B
> 
> Is that expected ?
> In the documentation I read:
> 
> "from" module "import" identifier
> 
> so I expected "from B import C" to be legal since B is a module
> 
> I tried this with Python 1.5.2 and 2.0 on an sgi under IRIX6.5
> 
> Thanks for any help
> 
> -Michel

In "from X import Y", X is not a reference to a name in your
namespace, it is a module name.  The right thing is indeed to write
"from A.B import C".  There's no way to shorten this; what you did
(assigning sys.modules['B'] = B) is asking for trouble.
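The rule can be seen end to end by rebuilding the A/B/C layout from the question in a scratch directory (the `value` attribute in C.py is invented for the demo):

```python
import os
import sys
import tempfile

# Rebuild the A/B/C package layout from the question in a temp dir.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "A", "B"))
open(os.path.join(root, "A", "__init__.py"), "w").close()
open(os.path.join(root, "A", "B", "__init__.py"), "w").close()
with open(os.path.join(root, "A", "B", "C.py"), "w") as fp:
    fp.write("value = 42\n")
sys.path.insert(0, root)

from A.B import C    # supported: X in "from X import Y" is a dotted module name
from A import B      # binds the submodule A.B to the local name B
assert B.__name__ == "A.B" and C.value == 42

try:
    exec("from B import C")   # B here is only a local name, not a module name
except ImportError as e:
    print("as expected:", e)
```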

Sorry!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From palisade at SirDrinkalot.rm-f.net  Sat Mar 17 03:37:54 2001
From: palisade at SirDrinkalot.rm-f.net (Palisade)
Date: Fri, 16 Mar 2001 18:37:54 -0800
Subject: [Python-Dev] PEP dircache.py core modification
Message-ID: <20010316183754.A7151@SirDrinkalot.rm-f.net>

This is my first exposure to the Python language, and I have found many things
to my liking. I have also noticed some quirks which I regard as assumption
flaws on the part of the interpreter. The one I am interested in at the moment is
the assumption that we should leave the . and .. directory entries out of the
directory listing returned by os.listdir().

I have read the PEP specification and have prepared a PEP for your
perusal. I hope you agree with me that this is both a philosophical issue
rooted in tradition and a duplication-of-effort problem that can be
readily solved while preserving backwards compatibility.

Thank you.

I have attached the PEP to this message.

Sincerely,
Nelson Rush

"This most beautiful system [The Universe] could only proceed from the
dominion of an intelligent and powerful Being."
-- Sir Isaac Newton
-------------- next part --------------
PEP: 
Title: os.listdir Full Directory Listing
Version: 
Author: palisade at users.sourceforge.net (Nelson Rush)
Status: 
Type: 
Created: 16/3/2001
Post-History: 

Introduction

    This PEP explains the need for two missing elements in the list returned
    by the os.listdir function.



Proposal

    It is obvious that having os.listdir() return a list with . and .. is
    going to cause many existing programs to function incorrectly. One
    solution to this problem could be to create a new function os.listdirall()
    or os.ldir() which returns every file and directory including the . and ..
    directory entries. Another solution could be to overload os.listdir's
    parameters, but that would unnecessarily complicate things.
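    A minimal sketch of the proposed wrapper (os.ldir does not exist;
    the name is only this PEP's proposal):

```python
import os

def ldir(path):
    # Hypothetical os.ldir(): the special entries '.' and '..'
    # followed by the usual os.listdir() output.
    return ['.', '..'] + os.listdir(path)

for name in ldir(os.curdir):
    print(name)
```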



Key Differences with the Existing Protocol

    The existing os.listdir() leaves out the . and .. directory entries,
    even though they are part of the directory listing just as every
    other file is.



Examples

    import os
    dir = os.ldir('/')
    for i in dir:
        print i

    The output would become:

    .
    ..
    lost+found
    tmp
    usr
    var
    WinNT
    dev
    bin
    home
    mnt
    sbin
    boot
    root
    man
    lib
    cdrom
    proc
    etc
    info
    pub
    .bash_history
    service



Dissenting Opinion

    During a discussion on Efnet #python, an objection was made to the
    usefulness of this implementation. Namely, that it is little extra
    effort to just insert these two directory entries into the list.

    Example:

    os.listdir() + ['.','..']

    An argument can be made, however, that including both . and ..
    matches the standard way of listing files within directories. It is
    on the basis of this convention, common across languages, that the
    tradition should be maintained.

    It was also suggested that not having . and .. returned in the list
    by default is required to be able to perform such actions as `cp * dest`.

    However, programs like `ls` and `cp` list and copy files while
    excluding any entry that begins with a period, since anything
    beginning with a period is considered hidden. Therefore there is no
    need to clip . and .. from the directory list by default.



Reference Implementation

    The reference implementation of the new dircache.py core ldir function
    extends listdir's functionality as proposed.

    http://palisade.rm-f.net/dircache.py



Copyright

    This document has been placed in the Public Domain.

From guido at digicool.com  Sat Mar 17 03:42:29 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 16 Mar 2001 21:42:29 -0500
Subject: [Python-Dev] PEP dircache.py core modification
In-Reply-To: Your message of "Fri, 16 Mar 2001 18:37:54 PST."
             <20010316183754.A7151@SirDrinkalot.rm-f.net> 
References: <20010316183754.A7151@SirDrinkalot.rm-f.net> 
Message-ID: <200103170242.VAA14061@cj20424-a.reston1.va.home.com>

Sorry, I see no merit in your proposal [to add "." and ".." back into
the output of os.listdir()].  You are overlooking the fact that the os
module in Python is intended to be a *portable* interface to operating
system functionality.  The presence of "." and ".." in a directory
listing is not supported on all platforms, e.g. not on Macintosh.

Also, my experience with using os.listdir() way back, when it *did*
return "." and "..", was that *every* program using os.listdir() had
to be careful to filter out "." and "..".  It simply wasn't useful to
include these.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From paul at prescod.net  Sat Mar 17 03:56:27 2001
From: paul at prescod.net (Paul Prescod)
Date: Fri, 16 Mar 2001 18:56:27 -0800
Subject: [Python-Dev] Sourceforge FAQ
Message-ID: <3AB2D25B.FA724414@prescod.net>

Who maintains this document?

http://python.sourceforge.net/sf-faq.html#p1

I have some suggestions.

 1. Put an email address for comments like this in it.
 2. In the section on generating diff's, put in the right options for a
context diff
 3. My SF FAQ isn't there: how do I generate a diff that has a new file
as part of it?

 Paul Prescod



From nas at arctrix.com  Sat Mar 17 03:59:22 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 18:59:22 -0800
Subject: [Python-Dev] Simple generator implementation
Message-ID: <20010316185922.A11046@glacier.fnational.com>

Before I jump into the black hole of coroutines and
continuations, here's a patch to remember me by:

    http://arctrix.com/nas/python/generator1.diff

Bye bye.

  Neil



From tim.one at home.com  Sat Mar 17 06:40:49 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 00:40:49 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <3AB2D25B.FA724414@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>

[Paul Prescod]
> Who maintains this document?
>
> http://python.sourceforge.net/sf-faq.html#p1

Who maintains ceval.c?  Same deal:  anyone with time, commit access, and
something they want to change.

> I have some suggestions.
>
>  1. Put an email address for comments like this in it.

If you're volunteering, happy to put in *your* email address <wink>.

>  2. In the section on generating diff's, put in the right options for a
> context diff

The CVS source is

    python/nondist/sf-html/sf-faq.html

You also need to upload (scp) it to

    shell.sourceforge.net:/home/groups/python/htdocs/

after you've committed your changes.

>  3. My SF FAQ isn't there: how do I generate a diff that has a new file
> as part of it?

"diff -c" <wink -- but I couldn't make much sense of this question>.




From tim.one at home.com  Sat Mar 17 10:29:24 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 04:29:24 -0500
Subject: [Python-Dev] Re: WYSIWYG decimal fractions)
In-Reply-To: <3AB256CD.AE35DDEC@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEEHJGAA.tim.one@home.com>

[M.-A. Lemburg, on FixedPoint.py]
> ...
> Very impressive ! The code really shows just how difficult it is
> to get this done right (w/r/t some definition of that term ;).

Yes and no.  Here's the "no" part:  I can do code like this in my sleep, due
to decades of experience.  So code like that isn't difficult at all for the
right person (yes, it *is* difficult if you don't already have the background
for it!  it's learnable, though <wink>).

Here's the "yes" part:  I have no experience with database or commercial
non-scientific applications, while people who do seem to have no clue about
how to *specify* what they need.  When I was writing FixedPoint.py, I asked
and asked what kind of rounding rules people needed, and what kind of
precision-propagation rules.  I got a grand total of 0 *useful* replies.  In
that sense it seems a lot like getting Python threads to work under HP-UX:
lots of people can complain, but no two HP-UX users agree on what's needed to
fix it.

In the end (for me), it *appeared* that there simply weren't any explicable
rules:  that among users of 10 different commercial apps, there were 20
different undocumented and proprietary legacy schemes for doing decimal
fixed- and floating-point.  I'm certain I could implement any of them via trivial variations
of the FixedPoint.py code, but I couldn't get a handle on what exactly they
were.

> BTW, does the implementation conform to the ANSI/IEEE standards ?

Sure, the source code strictly conforms to the ANSI character set <wink>.

Which standards specifically do you have in mind?  The decimal portions of
the COBOL and REXX standards are concerned with how decimal arithmetic
interacts with language-specific features, while the 854 standard is
concerned with decimal *floating* point (which the astute reader may have
guessed FixedPoint.py does not address).  So it doesn't conform to any of
those.  Rounding, when needed, is done in conformance with the *default*
"when rounding is needed, round via nearest-or-even as if the intermediate
result were known to infinite precision" 854 rules.  But I doubt that many
commercial implementations of decimal arithmetic use that rule.

My much fancier Rational package (which I never got around to making
available) supports 9 rounding modes directly, and can be user-extended to
any number of others.  I doubt any of the builtin ones are in much use either
(for example, the builtin "round away from 0" and "round to nearest, or
towards minus infinity in case of tie" aren't even useful to me <wink>).

Today I like Cowlishaw's "Standard Decimal Arithmetic Specification" at

    http://www2.hursley.ibm.com/decimal/decspec.html

but have no idea how close that is to commercial practice (OTOH, it's
compatible w/ REXX, and lots of database-heads love REXX).

> ...
> Note that I will have to interface to the database using the string
> representation, so I might get away with adding scale and precision
> parameters to a (new) asString() method.

As some of the module comments hint, FixedPoint.py started life with more
string gimmicks.  I ripped them out, though, for the same reason we *should*
drop thread support on HP-UX <0.6 wink>:  no two emails I got agreed on what
was needed, and the requests were mutually incompatible.  So I left a clean
base class for people to subclass as desired.

On 23 Dec 1999, Jim Fulton again raised "Fixed-decimal types" on Python-Dev.
I was on vacation & out of touch at the time.  Guido has surely forgotten
that he replied

    I like the idea of using the dd.ddL notation for this.

and will deny it if he reads this <wink>.

There's a long discussion after that -- look it up!  I see that I got around
to replying on 30 Dec 1999-- a repetition of this thread, really! --and
posted (Python) kernels for more flexible precision-control and rounding
policies than FixedPoint.py provided.

As is customary in the Python world, the first post that presented actual
code killed the discussion <wink/sigh> -- 'twas never mentioned again.

>> FixedPoint.py is better suited to computation than I/O, though,
>> since it uses Python longs internally, and conversion between
>> BCD-like formats and Python longs is expensive.

> See above: if string representations can be computed fast,

They cannot.  That was the point.  String representations *are* "BCD-like" to
me, in that they separate out each decimal digit.  To suck the individual
decimal digits out of a Python long requires a division by 10 for each digit.
Since people in COBOL routinely work with 32-digit decimal numbers, that's 32
*multi-precision* divisions by 10.  S-l-o-w.  You can play tricks like
dividing by 1000 instead, then use table lookup to get three digits at a
crack, but the overall process remains quadratic-time in the number of
digits.
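The per-digit division Tim describes looks like this in Python (a sketch of the conversion cost, not FixedPoint.py internals):

```python
def decimal_digits(n):
    # Extract the decimal digits of a nonnegative integer, most
    # significant first.  Each loop iteration performs one
    # multi-precision division by 10, so converting a d-digit number
    # costs d such divisions -- quadratic time overall.
    digits = []
    while True:
        n, d = divmod(n, 10)
        digits.append(d)
        if n == 0:
            break
    digits.reverse()
    return digits
```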

Converting from a string of decimal digits to a Python long is also quadratic
time, so using longs as an internal representation is expensive in both
directions.

It is by far the cheapest way to do *computations*, though.  So I meant what
I said in all respects.

> ...
> Hmm, ideal would be an Open Source C lib which could be used as
> backend for the implementation... haven't found such a beast yet
> and the IBM BigDecimal Java class doesn't really look attractive as
> basis for a C++ reimplementation.

It's easy to find GPL'ed code for decimal arithmetic (for example, pick up
the Regina REXX implementation linked to from the Cowlishaw page).  For that
matter, you could just clone Python's longint code and fiddle the base to a
power of 10 (mutatis mutandis), and stick an exponent ("scale factor") on it.
This is harder than it sounds, but quite doable.
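A toy version of that "longint plus scale factor" idea (the class name and methods are invented for the sketch; a real type also needs rounding, comparisons, multiplication, and so on):

```python
class Dec:
    """Toy decimal: value == digits * 10**-scale, with a Python int
    as the mantissa.  Just enough to show the idea of cloning the
    longint code and sticking a scale factor on it."""

    def __init__(self, digits, scale):
        self.digits = digits   # integer mantissa
        self.scale = scale     # number of digits after the point

    def __add__(self, other):
        # Align scales, then add exactly -- no rounding needed.
        s = max(self.scale, other.scale)
        a = self.digits * 10 ** (s - self.scale)
        b = other.digits * 10 ** (s - other.scale)
        return Dec(a + b, s)

    def __str__(self):
        sign = '-' if self.digits < 0 else ''
        d = str(abs(self.digits)).rjust(self.scale + 1, '0')
        if self.scale == 0:
            return sign + d
        return sign + d[:-self.scale] + '.' + d[-self.scale:]

print(Dec(125, 2))              # 1.25
print(Dec(125, 2) + Dec(5, 3))  # 1.255
```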

then-again-if-god-had-wanted-us-to-use-base-10-he-wouldn't-have-
    given-us-2-fingers-ly y'rs  - tim




From aahz at panix.com  Sat Mar 17 17:35:17 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sat, 17 Mar 2001 08:35:17 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010315233737.B29286@xs4all.nl> from "Thomas Wouters" at Mar 15, 2001 11:37:37 PM
Message-ID: <200103171635.LAA12321@panix2.panix.com>

>>     1. There must be zero syntax changes.  All .pyc and .pyo files
>>        must work (no regeneration needed) with all patch releases
>>        forked off from a feature release.
> 
> Hmm... Would making 'continue' work inside 'try' count as a bugfix or as a
> feature ? It's technically not a syntax change, but practically it is.
> (Invalid syntax suddenly becomes valid.) 

That's a good question.  The modifying sentence is the critical part:
would there be any change to the bytecodes generated?  Even if not, I'd
be inclined to reject it.

>>   Bug Fix Releases
>> 
>>     Bug fix releases are a subset of all patch releases; it is
>>     prohibited to add any features to the core in a bug fix release.
>>     A patch release that is not a bug fix release may contain minor
>>     feature enhancements, subject to the Prohibitions section.
> 
> I'm not for this 'bugfix release', 'patch release' difference. The
> numbering/naming convention is too confusing, not clear enough, and I don't
> see the added benefit of adding limited features. If people want features,
> they should go and get a feature release. The most important bit in patch
> ('bugfix') releases is not to add more bugs, and rewriting parts of code to
> fix a bug is something that is quite likely to insert more bugs. Sure, as
> the patch coder, you are probably certain there are no bugs -- but so was
> whoever added the bug in the first place :)

As I said earlier, the primary motivation for going this route was the
ambiguous issue of case-sensitive imports.  (Similar issues are likely
to crop up.)

>>     The Patch Czar decides when there are a sufficient number of
>>     patches to warrant a release.  The release gets packaged up,
>>     including a Windows installer, and made public as a beta release.
>>     If any new bugs are found, they must be fixed and a new beta
>>     release publicized.  Once a beta cycle completes with no new bugs
>>     found, the package is sent to PythonLabs for certification and
>>     publication on python.org.
> 
>>     Each beta cycle must last a minimum of one month.
> 
> This process probably needs a firm smack with reality, but that would have
> to wait until it meets some, first :) Deciding when to do a bugfix release
> is very tricky: some bugs warrant a quick release, but waiting to assemble
> more is generally a good idea. The whole beta cycle and windows
> installer/RPM/etc process is also a bottleneck. Will Tim do the Windows
> Installer (or whoever does it for the regular releases) ? If he's building
> the installer anyway, why can't he 'bless' the release right away ?

Remember that all bugfixes are available as patches off of SourceForge.
Anyone with a truly critical need is free to download the patch and
recompile.  Overall, I see patch releases as coinciding with feature
releases so that people can concentrate on doing the same kind of work
at the same time.

> I'm also not sure if a beta cycle in a bugfix release is really necessary,
> especially a month long one. Given that we have a feature release planned
> each 6 months, and a feature release has generally 2 alphas and 2 betas,
> plus sometimes a release candidate, plus the release itself, and a bugfix
> release would have one or two betas too, and say that we do two betas in
> those six months, that would make 10+ 'releases' of various form in those 6
> months. Ain't no-one[*] going to check them out for a decent spin, they'll
> just wait for the final version.

That's why I'm making the beta cycle artificially long (I'd even vote
for a two-month minimum).  It slows the release pace and -- given the
usually high quality of Python betas -- it encourages people to try them
out.  I believe that we *do* need patch betas for system testing.

>>     Should the first patch release following any feature release be
>>     required to be a bug fix release?  (Aahz proposes "yes".)
>>     Is it allowed to do multiple forks (e.g. is it permitted to have
>>     both 2.0.2 and 2.0.2p)?  (Aahz proposes "no".)
>>     Does it makes sense for a bug fix release to follow a patch
>>     release?  (E.g., 2.0.1, 2.0.2p, 2.0.3.)
> 
> More reasons not to have separate featurebugfixreleasethingies and
> bugfix-releases :)

Fair enough.

>>     What is the equivalent of python-dev for people who are
>>     responsible for maintaining Python?  (Aahz proposes either
>>     python-patch or python-maint, hosted at either python.org or
>>     xs4all.net.)
> 
> It would probably never be hosted at .xs4all.net. We use the .net address
> for network related stuff, and as a nice Personality Enhancer (read: IRC
> dick extender) for employees. We'd be happy to host stuff, but I would
> actually prefer to have it under a python.org or some other python-related
> domainname. That forestalls python questions going to admin at xs4all.net :) A
> small logo somewhere on the main page would be nice, but stuff like that
> should be discussed if it's ever an option, not just because you like the
> name 'XS4ALL' :-)

Okay, I didn't mean to imply that it would literally be @xs4all.net.

>>     Does SourceForge make it possible to maintain both separate and
>>     combined bug lists for multiple forks?  If not, how do we mark
>>     bugs fixed in different forks?  (Simplest is to simply generate a
>>     new bug for each fork that it gets fixed in, referring back to the
>>     main bug number for details.)
> 
> We could make it a separate SF project, just for the sake of keeping
> bugreports/fixes in the maintenance branch and the head branch apart. The
> main Python project already has an unwieldy number of open bugreports and
> patches.

That was one of my thoughts, but I'm not entitled to an opinion (I don't
have an informed opinion ;-).

> I'm also for starting the maintenance branch right after the real release,
> and start adding bugfixes to it right away, as soon as they show up. Keeping
> up to date on bugfixes to the head branch is then as 'simple' as watching
> python-checkins. (Up until the fact a whole subsystem gets rewritten, that
> is :) People should still be able to submit bugfixes for the maintenance
> branch specifically.

That is *precisely* why my original proposal suggested that only the N-1
release get patch attention, to conserve effort.  It is also why I
suggested that patch releases get hooked to feature releases.

> And I'm still willing to be the patch monkey, though I don't think I'm the
> only or the best candidate. I'll happily contribute regardless of who gets
> the blame :)

If you're willing to do the work, I'd love it if you were the official
Patch Czar.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From ping at lfw.org  Sat Mar 17 23:00:22 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Sat, 17 Mar 2001 14:00:22 -0800 (PST)
Subject: [Python-Dev] Scoping (corner cases)
Message-ID: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org>

Hey there.

What's going on here?

    Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
    [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> x = 1
    >>> class Foo:
    ...     print x
    ... 
    1
    >>> class Foo:  
    ...     print x
    ...     x = 1
    ... 
    1
    >>> class Foo:
    ...     print x
    ...     x = 2
    ...     print x
    ... 
    1
    2
    >>> x
    1

Can we come up with a consistent story on class scopes for 2.1?



-- ?!ng




From guido at digicool.com  Sat Mar 17 23:19:52 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 17 Mar 2001 17:19:52 -0500
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: Your message of "Sat, 17 Mar 2001 14:00:22 PST."
             <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org> 
References: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org> 
Message-ID: <200103172219.RAA16377@cj20424-a.reston1.va.home.com>

> What's going on here?
> 
>     Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> x = 1
>     >>> class Foo:
>     ...     print x
>     ... 
>     1
>     >>> class Foo:  
>     ...     print x
>     ...     x = 1
>     ... 
>     1
>     >>> class Foo:
>     ...     print x
>     ...     x = 2
>     ...     print x
>     ... 
>     1
>     2
>     >>> x
>     1
> 
> Can we come up with a consistent story on class scopes for 2.1?

They are consistent with all past versions of Python.

Class scopes don't work like function scopes -- they use LOAD_NAME and
STORE_NAME.
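
A minimal illustration of that dynamic lookup: because the class body reads
and writes names with LOAD_NAME/STORE_NAME, a read before the class-local
assignment silently finds the global, and a read after it finds the local.

```python
# Class bodies resolve names at execution time (LOAD_NAME), so the same
# name can refer to the global before the class-local binding exists and
# to the class-local binding afterwards.
x = 1

class Foo:
    before = x   # no class-local x exists yet, so this finds the global (1)
    x = 2        # STORE_NAME now creates a class-local x
    after = x    # this read finds the class-local x (2)

print(Foo.before, Foo.after, x)  # 1 2 1
```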

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Sat Mar 17 03:16:23 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Fri, 16 Mar 2001 21:16:23 -0500 (EST)
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: <200103172219.RAA16377@cj20424-a.reston1.va.home.com>
References: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org>
	<200103172219.RAA16377@cj20424-a.reston1.va.home.com>
Message-ID: <15026.51447.862936.753570@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

  >> Can we come up with a consistent story on class scopes for 2.1?

  GvR> They are consistent with all past versions of Python.

Phew!

  GvR> Class scopes don't work like function scopes -- they use
  GvR> LOAD_NAME and STORE_NAME.

Class scopes are also different because a block's free variables are
not resolved in enclosing class scopes.  We'll need to make sure the
doc says that class scopes and function scopes are different.
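
A small sketch of that difference: a free variable inside a method skips
the enclosing class scope entirely and resolves in the module scope.

```python
# The class attribute y is visible as C.y, but a nested function's free
# variable named y is not resolved in the class scope -- it skips straight
# to the enclosing (here: module) scope.
y = "global"

class C:
    y = "class attribute"
    def f(self):
        return y   # finds the module-level y, not C.y

print(C().f())  # global
```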

Jeremy




From tim.one at home.com  Sat Mar 17 23:31:08 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 17:31:08 -0500
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFLJGAA.tim.one@home.com>

[Ka-Ping Yee]
> What's going on here?
>
>     Python 2.1b1 (#15, Mar 16 2001, 04:31:43)
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> x = 1
>     >>> class Foo:
>     ...     print x
>     ...
>     1
>     >>> class Foo:
>     ...     print x

IMO, this one should have yielded an UnboundLocalError at runtime.  "A class
definition is a code block", and has a local namespace that's supposed to
follow the namespace rules; since x is bound to on the next line, x should be
a local name within the class body.
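
For contrast, the same pattern inside a function really does follow that
rule (a sketch in present-day Python; the scoping rule is the same):

```python
# Because x is assigned later in the body, the compiler treats x as a
# local for the whole function, so the earlier read fails with
# UnboundLocalError -- the behaviour being argued for in class bodies.
x = 1

def f():
    val = x   # x is local here (it is assigned below) but not yet bound
    x = 2
    return val

try:
    f()
    outcome = "no error"
except UnboundLocalError:
    outcome = "UnboundLocalError"

print(outcome)  # UnboundLocalError
```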

>     ...     x = 1
>     ...
>     1
>     >>> class Foo:
>     ...     print x

Ditto.

>     ...     x = 2
>     ...     print x
>     ...
>     1
>     2
>     >>> x
>     1
>
> Can we come up with a consistent story on class scopes for 2.1?

The story is consistent but the implementation is flawed <wink>.  Please open
a bug report; I wouldn't consider it high priority, though, as this is
unusual stuff to do in a class definition.




From tim.one at home.com  Sat Mar 17 23:33:07 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 17:33:07 -0500
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: <15026.51447.862936.753570@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEFMJGAA.tim.one@home.com>

[Guido]
> Class scopes don't work like function scopes -- they use
> LOAD_NAME and STORE_NAME.

[Jeremy]
> Class scopes are also different because a block's free variables are
> not resolved in enclosing class scopes.  We'll need to make sure the
> doc says that class scopes and function scopes are different.

Yup.  Since I'll never want to do stuff like this, I don't really care a heck
of a lot what it does; but it should be documented!

What does Jython do with these?




From thomas at xs4all.net  Sun Mar 18 00:01:09 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 00:01:09 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103171635.LAA12321@panix2.panix.com>; from aahz@panix.com on Sat, Mar 17, 2001 at 08:35:17AM -0800
References: <20010315233737.B29286@xs4all.nl> <200103171635.LAA12321@panix2.panix.com>
Message-ID: <20010318000109.M27808@xs4all.nl>

On Sat, Mar 17, 2001 at 08:35:17AM -0800, aahz at panix.com wrote:

> Remember that all bugfixes are available as patches off of SourceForge.

I'm sorry, Aahz, but that is just plain not true. It's not a little bit not
true, it's very not true. A lot of the patches applied are either never
submitted to SF (because it's the 'obvious fix' by one of the commiters) or
are modified to some extent from the SF patch proposed. (Often
formatting/code style, fairly frequently symbol renaming, and not too
infrequently changes in the logic for various reasons.)

> > ... that would make 10+ 'releases' of various form in those 6 months.
> > Ain't no-one[*] going to check them out for a decent spin, they'll just
> > wait for the final version.

> That's why I'm making the beta cycle artificially long (I'd even vote
> for a two-month minimum).  It slows the release pace and -- given the
> usually high quality of Python betas -- it encourages people to try them
> out.  I believe that we *do* need patch betas for system testing.

But having a patch release once every 6 months negates the whole purpose of
patch releases :) If you are in need of a bugfix, you don't want to wait
three months before a bugfix release beta with your specific bug fixed is
going to be released, and you don't want to wait two months more for the
release to become final. (Note: we're talking people who don't want to use
the next feature release beta or current CVS version, so they aren't likely
to try a bugfix release beta either.) Bugfix releases should come often-ish,
compared to feature releases. But maybe we can get the BDFL to slow the pace
of feature releases instead ? Is the 6-month speedway really appropriate if
we have a separate bugfix release track ?

> > I'm also for starting the maintenance branch right after the real release,
> > and start adding bugfixes to it right away, as soon as they show up. Keeping
> > up to date on bufixes to the head branch is then as 'simple' as watching
> > python-checkins. (Up until the fact a whole subsystem gets rewritten, that
> > is :) People should still be able to submit bugfixes for the maintenance
> > branch specifically.

> That is *precisely* why my original proposal suggested that only the N-1
> release get patch attention, to conserve effort.  It is also why I
> suggested that patch releases get hooked to feature releases.

There is no technical reason to do just N-1. You can branch off as often as
you want (in fact, branches never disappear, so if we were building 3.5 ten
years from now (and we were still using CVS <wink GregS>) we could apply
a specific patch to the 2.0 maintenance branch and release 2.0.128, if need
be.)

Keeping too many maintenance branches active does bring the administrative
nightmare with it, of course. We can start with just N-1 and see where it
goes from there. If significant numbers of people are still using 2.0.5 when
2.2 comes out, we might have to reconsider.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Sun Mar 18 00:26:45 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 00:26:45 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>; from tim.one@home.com on Sat, Mar 17, 2001 at 12:40:49AM -0500
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
Message-ID: <20010318002645.H29286@xs4all.nl>

On Sat, Mar 17, 2001 at 12:40:49AM -0500, Tim Peters wrote:

> >  3. My SF FAQ isn't there: how do I generate a diff that has a new file
> > as part of it?

> "diff -c" <wink -- but I couldn't make much sense of this question>.

What Paul means is that he's added a new file to his tree, and wants to send
in a patch that includes that file. Unfortunately, CVS can't do that :P You
have two choices:

- 'cvs add' the file, but don't commit. This is kinda lame since it requires
 commit access, and it creates the administrivia for the file already. I
 *think* that if you do this, only you can actually add the file (after the
 patch is accepted ;) but I'm not sure. After the cvs add, a cvs diff -c will
 show the file (as all +'es, obviously) even though it will complain to
 stderr about its ignorance about that specific file.

- Don't use cvs diff. Use real diff instead. Something like this:

  mv your tree aside, (can just mv your 'src' dir to 'src.mypatch' or such)
  cvs update -d,
  make distclean in your old tree,
  diff -crN --exclude=CVS src src.mypatch > mypatch.diff

 Scan your diff for bogus files, delete the sections by hand or if there are
 too many of them, add more --exclude options to your diff. I usually use
 '--exclude=".#*"' as well, and I forget what else.  By the way, for those
 who don't know it yet, an easy way to scan the patch is using 'diffstat'.

Note that to *apply* a patch like that (one with a new file), you need a
reasonably up-to-date GNU 'patch'.
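
A runnable sketch of the "real diff" recipe above, using two throwaway
directories instead of a CVS checkout (the directory names are made up):
GNU diff's -N flag treats a file present on only one side as empty on the
other side, which is what gets brand-new files into the patch.

```shell
# Set up a "pristine" tree and a "patched" tree containing one new file.
mkdir -p src src.mypatch
echo "original line" > src/a.txt
echo "original line" > src.mypatch/a.txt
echo "brand new file" > src.mypatch/b.txt

# -c context format, -r recursive, -N treat absent files as empty.
# diff exits 1 when differences exist, so don't treat that as an error.
diff -crN --exclude=CVS src src.mypatch > mypatch.diff || true

# The new file's contents appear in the patch as added lines.
grep -c "brand new file" mypatch.diff
```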

I haven't added all this to the SF FAQ because, uhm, well, I consider them
lame hacks. I've long suspected there was a better way to do this, but I
haven't found it or even heard rumours about it yet. We should probably add
it to the FAQ anyway (just the 2nd option, though.)

Of course, there is a third way: write your own diff >;> It's not that hard,
really :) 

diff -crN ....
*** <name of file>      Thu Jan  1 01:00:00 1970
--- <name of file>      <timestamp of file>
***************
*** 0 ****
--- 1,<number of lines in file> ----
<file, each line prefixed by '+ '>

You can just insert this chunk (with an Index: line and some fake RCS cruft,
if you want -- patch doesn't use it anyway, IIRC) somewhere in your patch
file.

A couple of weeks back, while on a 10-hour nighttime spree to fix all our
SSH clients and daemons to openssh 2.5 where possible and a handpatched ssh1
where necessary, I found myself unconsciously writing diffs instead of
editing source and re-diffing the files, because I apparently thought it was
faster (it was, too.) Scarily enough, I got all the line numbers and such
correct, and patch didn't whine about them at all ;)


Sign-o-the-nerdy-times-I-guess-ly y'rs ;)
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim.one at home.com  Sun Mar 18 00:49:22 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 18:49:22 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <20010318002645.H29286@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFNJGAA.tim.one@home.com>

[Paul]
>>>  3. My SF FAQ isn't there: how do I generate a diff that has a new file
>>>     as part of it?

[Tim]
>> "diff -c" <wink -- but I couldn't make much sense of this question>.

[Thomas]
> What Paul means is that he's added a new file to his tree, and
> wants to send in a patch that includes that file.

Ya, I picked that up after Martin explained it.  Best I could make out was
that Paul had written his own SF FAQ document and wanted to know how to
generate a diff that incorporated it as "a new file" into the existing SF
FAQ.  But then I've been severely sleep-deprived most of the last week
<0.zzzz wink>.

> ...
> - Don't use cvs diff. Use real diff instead. Something like this:
>
>   mv your tree asside, (can just mv your 'src' dir to
>                         'src.mypatch' or such)
>   cvs update -d,
>   make distclean in your old tree,
>   diff -crN --exclude=CVS src src.mypatch > mypatch.diff
>
> Scan your diff for bogus files, delete the sections by hand or if
> there are too many of them, add more --exclude options to your diff. I
> usually use '--exclude=".#*"' as well, and I forget what else.  By the
> way, for those who don't know it yet, an easy way to scan the patch is
> using 'diffstat'.
>
> Note that to *apply* a patch like that (one with a new file), you need a
> reasonably up-to-date GNU 'patch'.
> ...

I'm always amused that Unix users never allow the limitations of their tools
to convince them to do something obvious instead.

on-windows-you-just-tell-tim-to-change-the-installer<wink>-ly y'rs  - tim




From thomas at xs4all.net  Sun Mar 18 00:58:40 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 00:58:40 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEFNJGAA.tim.one@home.com>; from tim.one@home.com on Sat, Mar 17, 2001 at 06:49:22PM -0500
References: <20010318002645.H29286@xs4all.nl> <LNBBLJKPBEHFEDALKOLCIEFNJGAA.tim.one@home.com>
Message-ID: <20010318005840.K29286@xs4all.nl>

On Sat, Mar 17, 2001 at 06:49:22PM -0500, Tim Peters wrote:

> I'm always amused that Unix users never allow the limitations of their tools
> to convince them to do something obvious instead.

What would be the obvious thing ? Just check it in ? :-)
Note that CVS's dinkytoy attitude did prompt several people to do the
obvious thing: they started to rewrite it from scratch. Greg Stein jumped in
with those people to help them out on the tough infrastructure decisions,
which is why one of my *other* posts that mentioned CVS did a <wink GregS>
;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim.one at home.com  Sun Mar 18 01:17:06 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 19:17:06 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <20010318005840.K29286@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFOJGAA.tim.one@home.com>

[Thomas Wouters]
> What would be the obvious thing ? Just check it in ? :-)

No:  as my signoff line implied, switch to Windows and tell Tim to deal with
it.  Works for everyone except me <wink>!  I was just tweaking you.  For a
patch on SF, it should be enough to just attach the new files and leave a
comment saying where they belong.

> Note that CVS's dinkytoy attitude did prompt several people to do the
> obvious thing: they started to rewrite it from scratch. Greg Stein
> jumped in with those people to help them out on the touch infrastructure
> decisions, which is why one of my *other* posts that mentioned CVS did a
> <wink GregS>
> ;)

Yup, *that* I picked up.

BTW, I'm always amused that Unix users never allow the lateness of their
rewrite-from-scratch boondoggles to convince them to do something obvious
instead.

wondering-how-many-times-someone-will-bite-ly y'rs  - tim




From pedroni at inf.ethz.ch  Sun Mar 18 01:27:48 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Sun, 18 Mar 2001 01:27:48 +0100
Subject: [Python-Dev] Scoping (corner cases)
References: <LNBBLJKPBEHFEDALKOLCAEFMJGAA.tim.one@home.com>
Message-ID: <3AB40104.8020109@inf.ethz.ch>

Hi.

Tim Peters wrote:

> [Guido]
> 
>> Class scopes don't work like function scopes -- they use
>> LOAD_NAME and STORE_NAME.
> 
> 
> [Jeremy]
> 
>> Class scopes are also different because a block's free variables are
>> not resolved in enclosing class scopes.  We'll need to make sure the
>> doc says that class scopes and function scopes are different.
> 
> 
> Yup.  Since I'll never want to do stuff like this, I don't really care a heck
> of a lot what it does; but it should be documented!
> 
> What does Jython do with these?

The Jython codebase (both before and after my nested scopes changes) does 
exactly the same as CPython: something equivalent to LOAD_NAME and
STORE_NAME is used in class scopes.

regards




From pedroni at inf.ethz.ch  Sun Mar 18 02:17:47 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Sun, 18 Mar 2001 02:17:47 +0100
Subject: [Python-Dev] Icon-style generators vs. jython
References: <LNBBLJKPBEHFEDALKOLCAEFLJGAA.tim.one@home.com>
Message-ID: <3AB40CBB.2050308@inf.ethz.ch>

>   

This is very preliminary; I haven't had time to read the details, try
things, or look at Neil's implementation.

As far as I understand, Icon generators are functions with normal entry
and exit points plus multiple suspension points: at a suspension point an
implementation should save the current frame state somewhere inside the
function object, together with the information about where the function
should restart, and then return a value (or nothing) normally.

In Jython we have frames, and functions are encapsulated in objects, so 
the whole thing should be doable (with some effort). I expect we can deal
with the multiple entry points using a JVM switch bytecode. Entry code or 
function dispatch code should handle restarting (we already have
code that manages frame creation and function dispatch on every Python 
call).

There could be a problem with jythonc (the Jython-to-Java compiler) 
because it produces Java source code rather than bytecode directly:
at the source level, AFAIK, Java cannot intermingle switches with other
control structures, so it is unclear how to handle multiple entry points
(no goto ;)). We would have to rewrite it to produce bytecode directly.
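
The dispatch-on-entry-point idea can be sketched without any JVM details
(in Python, for brevity; the class and state names are made up): the
compiled body becomes a loop over numbered states, so resuming is a plain
call that jumps to the saved restart point, with no goto needed.

```python
# A resumable function simulated as a state machine: self.state records
# which "entry point" the next call should restart from.
class Resumable:
    def __init__(self):
        self.state = 0   # which entry point to restart from
        self.n = 0

    def resume(self):
        while True:
            if self.state == 0:          # entry point 0: initial entry
                self.state = 1
                return "started"
            elif self.state == 1:        # entry point 1: after a suspension
                if self.n < 3:
                    self.n += 1
                    return self.n        # suspend, handing back a value
                self.state = 2
            else:                        # entry point 2: exhausted
                return None

r = Resumable()
results = [r.resume() for _ in range(5)]
print(results)  # ['started', 1, 2, 3, None]
```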

What is the expected behaviour with respect to threads: should generators
be reentrant (meaning that frame and restart info is saved on a per-thread
basis), or are they somehow global active objects, so that if thread 1
calls a generator that suspends, thread 2 will reenter it after the
suspension point?

Freezing more than one frame is not directly possible in Jython: frames 
are pushed and popped on the Java stack, and function calls pass through
the Java calling mechanism. (I imagine you would need a separate thread
to do that.)

regards.




From tim.one at home.com  Sun Mar 18 02:36:40 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 20:36:40 -0500
Subject: FW: FW: [Python-Dev] Simple generator implementation
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGCJGAA.tim.one@home.com>

FYI, I pointed a correspondent to Neil's new generator patch (among other
things), and got this back.  Not being a Web Guy at heart, I don't have a
clue about XSLT (just enough to know that 4-letter acronyms are a web
abomination <wink>).

Note:  in earlier correspondence, the generator idea didn't seem to "click"
until I called them "resumable functions" (as I often did in the past, but
fell out of the habit).  People new to the concept often pick that up
quicker, or even, as in this case, remember that they once rolled such a
thing by hand out of prior necessity.

Anyway, possibly food for thought if XSLT means something to you ...


-----Original Message-----
From: XXX
Sent: Saturday, March 17, 2001 8:09 PM
To: Tim Peters
Subject: Re: FW: [Python-Dev] Simple generator implementation


On Sat, 17 Mar 2001, Tim Peters wrote:
> It's been done at least three times by now, most recently yesterday(!):

Thanks for the pointer.  I've started to read some
of the material you pointed me to... generators
are indeed very interesting.  They are what is
needed for an efficient implementation of XSLT.
(I was part of an XSLT implementation team that had to
dream up essentially the same solution). This is
all very cool.  Glad to see that I'm just re-inventing
the wheel.  Let's get generators in Python!

;) XXX




From paulp at ActiveState.com  Sun Mar 18 02:50:39 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sat, 17 Mar 2001 17:50:39 -0800
Subject: FW: FW: [Python-Dev] Simple generator implementation
References: <LNBBLJKPBEHFEDALKOLCGEGCJGAA.tim.one@home.com>
Message-ID: <3AB4146E.62AE3299@ActiveState.com>

I would call what you need for an efficient XSLT implementation "lazy
lists." They are never infinite but you would rather not pre-compute
them in advance. Often you use only the first item. Iterators probably
would be a good implementation technique.
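
One way that iterator-based implementation could look (a sketch; the
`LazyList` name is made up): items are pulled from an iterator on demand
and cached, so only the prefix you actually index is ever computed.

```python
import itertools

# A lazy list: the source iterator stands in for an expensive computation
# you would rather not run to completion up front.
class LazyList:
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._cache = []

    def __getitem__(self, i):
        while len(self._cache) <= i:           # pull items only as needed
            self._cache.append(next(self._it))
        return self._cache[i]

squares = LazyList(n * n for n in itertools.count())
print(squares[0], squares[3])  # 0 9 -- nothing past index 3 is computed
```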
-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From nas at arctrix.com  Sun Mar 18 03:17:41 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Sat, 17 Mar 2001 18:17:41 -0800
Subject: [Python-Dev] Simple generators, round 2
Message-ID: <20010317181741.B12195@glacier.fnational.com>

I've got a different implementation.  There are no new keywords
and it's simpler to wrap a high-level interface around the low-level
interface.

    http://arctrix.com/nas/python/generator2.diff

What the patch does:

    Split the big for loop and switch statement out of eval_code2
    into PyEval_EvalFrame.

    Add a new "why" flag for ceval, WHY_SUSPEND.  It is similar to
    WHY_RETURN except that the frame value stack and the block stack
    are not touched.  The frame is also marked resumable before
    returning (f_stackbottom != NULL).

    Add two new methods to frame objects, suspend and resume.
    suspend takes one argument which gets attached to the frame
    (f_suspendvalue).  This tells ceval to suspend as soon as control
    gets back to this frame.  resume, strangely enough, resumes a
    suspended frame.  Execution continues at the point it was
    suspended.  This is done by calling PyEval_EvalFrame on the frame
    object.

    Make frame_dealloc clean up the stack and decref f_suspendvalue
    if it exists.

There are probably still bugs and it slows down ceval too much,
but otherwise things are looking good.  Here are some examples
(they're a little long but illustrative).  Low level
interface, similar to my last example:

    # print 0 to 999
    import sys

    def g():
        for n in range(1000):
            f = sys._getframe()
            f.suspend((n, f))
        return None, None

    n, frame = g()
    while frame:
        print n
        n, frame = frame.resume()

Let's build something easier to use:

    # Generator.py
    import sys

    class Generator:
        def __init__(self):
            self.frame = sys._getframe(1)
            self.frame.suspend(self)
            
        def suspend(self, value):
            self.frame.suspend(value)

        def end(self):
            raise IndexError

        def __getitem__(self, i):
            # fake indices suck, need iterators
            return self.frame.resume()

Now let's try Guido's pi example:

    # Prints out the first 100 digits of pi
    from Generator import Generator

    def pi():
        g = Generator()
        k, a, b, a1, b1 = 2L, 4L, 1L, 12L, 4L
        while 1:
            # Next approximation
            p, q, k = k*k, 2L*k+1L, k+1L
            a, b, a1, b1 = a1, b1, p*a+q*a1, p*b+q*b1
            # Print common digits
            d, d1 = a/b, a1/b1
            while d == d1:
                g.suspend(int(d))
                a, a1 = 10L*(a%b), 10L*(a1%b1)
                d, d1 = a/b, a1/b1

    def test():
        pi_digits = pi()
        for i in range(100):
            print pi_digits[i],

    if __name__ == "__main__":
        test()

Some tree traversals:

    from types import TupleType
    from Generator import Generator

    # (A - B) + C * (E/F)
    expr = ("+", 
             ("-", "A", "B"),
             ("*", "C",
                  ("/", "E", "F")))
               
    def postorder(node):
        g = Generator()
        if isinstance(node, TupleType):
            value, left, right = node
            for child in postorder(left):
                g.suspend(child)
            for child in postorder(right):
                g.suspend(child)
            g.suspend(value)
        else:
            g.suspend(node)
        g.end()

    print "postorder:",
    for node in postorder(expr):
        print node,
    print

This prints:

    postorder: A B - C E F / * +
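
With hindsight, the Generator class above is essentially what a built-in
suspension facility would provide. The same traversal in present-day
Python, where `yield` plays the role of `g.suspend(...)` (this syntax did
not exist at the time of this thread):

```python
# Postorder traversal with the suspension built into the language:
# each yield suspends the frame exactly where g.suspend() did above.
def postorder(node):
    if isinstance(node, tuple):
        value, left, right = node
        for child in postorder(left):    # re-yield the left subtree
            yield child
        for child in postorder(right):   # then the right subtree
            yield child
        yield value                      # then the node itself
    else:
        yield node

# (A - B) + C * (E/F)
expr = ("+",
        ("-", "A", "B"),
        ("*", "C",
             ("/", "E", "F")))

print("postorder:", " ".join(postorder(expr)))  # postorder: A B - C E F / * +
```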

Cheers,

  Neil



From aahz at panix.com  Sun Mar 18 07:31:39 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sat, 17 Mar 2001 22:31:39 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010318000109.M27808@xs4all.nl> from "Thomas Wouters" at Mar 18, 2001 12:01:09 AM
Message-ID: <200103180631.BAA03321@panix3.panix.com>

>> Remember that all bugfixes are available as patches off of SourceForge.
> 
> I'm sorry, Aahz, but that is just plain not true. It's not a little bit not
> true, it's very not true. A lot of the patches applied are either never
> submitted to SF (because it's the 'obvious fix' by one of the commiters) or
> are modified to some extent from thh SF patch proposed. (Often
> formatting/code style, fairly frequently symbol renaming, and not too
> infrequently changes in the logic for various reasons.)

I'm thinking one of us is confused.  CVS is hosted at SourceForge,
right?  People can download specific parts of Python from SF?  And we're
presuming there will be a specific fork that patches are checked in to?
So in what way is my statement not true?

>>> ... that would make 10+ 'releases' of various form in those 6 months.
>>> Ain't no-one[*] going to check them out for a decent spin, they'll just
>>> wait for the final version.
>> 
>> That's why I'm making the beta cycle artificially long (I'd even vote
>> for a two-month minimum).  It slows the release pace and -- given the
>> usually high quality of Python betas -- it encourages people to try them
>> out.  I believe that we *do* need patch betas for system testing.
> 
> But having a patch release once every 6 months negates the whole
> purpose of patch releases :) If you are in need of a bugfix, you
> don't want to wait three months before a bugfix release beta with
> your specific bug fixed is going to be released, and you don't want
> to wait two months more for the release to become final. (Note: we're
> talking people who don't want to use the next feature release beta or
> current CVS version, so they aren't likely to try a bugfix release
> beta either.) Bugfix releases should come often-ish, compared to
> feature releases. But maybe we can get the BDFL to slow the pace of
> feature releases instead ? Is the 6-month speedway really appropriate
> if we have a separate bugfix release track ?

Well, given that neither of us is arguing on the basis of actual
experience with Python patch releases, there's no way we can prove one
point of view as being better than the other.  Tell you what, though:
take the job of Patch Czar, and I'll follow your lead.  I'll just
reserve the right to say "I told you so".  ;-)

>>> I'm also for starting the maintenance branch right after the real release,
>>> and start adding bugfixes to it right away, as soon as they show up. Keeping
>>> up to date on bufixes to the head branch is then as 'simple' as watching
>>> python-checkins. (Up until the fact a whole subsystem gets rewritten, that
>>> is :) People should still be able to submit bugfixes for the maintenance
>>> branch specifically.
> 
>> That is *precisely* why my original proposal suggested that only the N-1
>> release get patch attention, to conserve effort.  It is also why I
>> suggested that patch releases get hooked to feature releases.
> 
> There is no technical reason to do just N-1. You can branch of as often as
> you want (in fact, branches never disappear, so if we were building 3.5 ten
> years from now (and we would still be using CVS <wink GregS>) we could apply
> a specific patch to the 2.0 maintenance branch and release 2.0.128, if need
> be.)

No technical reason, no.  It's just that N-1 is going to be similar
enough to N, particularly for any given bugfix, that it should be
"trivial" to keep the bugfixes in synch.  That's all.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From esr at snark.thyrsus.com  Sun Mar 18 07:46:28 2001
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Sun, 18 Mar 2001 01:46:28 -0500
Subject: [Python-Dev] Followup on freezetools error
Message-ID: <200103180646.f2I6kSV16765@snark.thyrsus.com>

OK, so following Guido's advice I did a CVS update and reinstall and
then tried a freeze on the CML2 compiler.  Result:

Traceback (most recent call last):
  File "freezetools/freeze.py", line 460, in ?
    main()
  File "freezetools/freeze.py", line 321, in main
    mf.import_hook(mod)
  File "freezetools/modulefinder.py", line 106, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "freezetools/modulefinder.py", line 147, in find_head_package
    q = self.import_module(head, qname, parent)
  File "freezetools/modulefinder.py", line 232, in import_module
    m = self.load_module(fqname, fp, pathname, stuff)
  File "freezetools/modulefinder.py", line 260, in load_module
    self.scan_code(co, m)
  File "freezetools/modulefinder.py", line 280, in scan_code
    self.import_hook(name, m)
  File "freezetools/modulefinder.py", line 106, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "freezetools/modulefinder.py", line 147, in find_head_package
    q = self.import_module(head, qname, parent)
  File "freezetools/modulefinder.py", line 232, in import_module
    m = self.load_module(fqname, fp, pathname, stuff)
  File "freezetools/modulefinder.py", line 260, in load_module
  [... the same scan_code -> import_hook -> find_head_package ->
   import_module -> load_module cycle repeats many more times ...]
  File "freezetools/modulefinder.py", line 288, in scan_code
    assert lastname is not None
AssertionError
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Question with boldness even the existence of a God; because, if there
be one, he must more approve the homage of reason, than that of
blindfolded fear.... Do not be frightened from this inquiry from any
fear of its consequences. If it ends in the belief that there is no
God, you will find incitements to virtue in the comfort and
pleasantness you feel in its exercise...
	-- Thomas Jefferson, in a 1787 letter to his nephew



From esr at snark.thyrsus.com  Sun Mar 18 08:06:08 2001
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Sun, 18 Mar 2001 02:06:08 -0500
Subject: [Python-Dev] Re: Followup on freezetools error
Message-ID: <200103180706.f2I768q17436@snark.thyrsus.com>

Cancel previous complaint.  Pilot error.  I think I'm going to end up
writing some documentation for this puppy...
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

You know why there's a Second Amendment?  In case the government fails to
follow the first one.
         -- Rush Limbaugh, in a moment of unaccustomed profundity 17 Aug 1993



From pedroni at inf.ethz.ch  Sun Mar 18 13:01:40 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Sun, 18 Mar 2001 13:01:40 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com>
Message-ID: <001901c0afa3$322094e0$f979fea9@newmexico>

This kind of low-level implementation, where suspension points are known
only at runtime, cannot be implemented in Jython (at least not in a way
that is both reasonable and not costly).  The Jython codebase is likely
to allow only generators whose suspension points are known at compile
time.

regards.

----- Original Message -----
From: Neil Schemenauer <nas at arctrix.com>
To: <python-dev at python.org>
Sent: Sunday, March 18, 2001 3:17 AM
Subject: [Python-Dev] Simple generators, round 2


> I've got a different implementation.  There are no new keywords
> and it's simpler to wrap a high-level interface around the
> low-level interface.
>
>     http://arctrix.com/nas/python/generator2.diff
>
> What the patch does:
>
>     Split the big for loop and switch statement out of eval_code2
>     into PyEval_EvalFrame.
>
>     Add a new "why" flag for ceval, WHY_SUSPEND.  It is similar to
>     WHY_RETURN except that the frame value stack and the block stack
>     are not touched.  The frame is also marked resumable before
>     returning (f_stackbottom != NULL).
>
>     Add two new methods to frame objects, suspend and resume.
>     suspend takes one argument which gets attached to the frame
>     (f_suspendvalue).  This tells ceval to suspend as soon as control
>     gets back to this frame.  resume, strangely enough, resumes a
>     suspended frame.  Execution continues at the point it was
>     suspended.  This is done by calling PyEval_EvalFrame on the frame
>     object.
>
>     Make frame_dealloc clean up the stack and decref f_suspendvalue
>     if it exists.
>
> There are probably still bugs and it slows down ceval too much
> but otherwise things are looking good.  Here are some examples
> (they're a little long but illustrative).  Low-level
> interface, similar to my last example:
>
>     # print 0 to 999
>     import sys
>
>     def g():
>         for n in range(1000):
>             f = sys._getframe()
>             f.suspend((n, f))
>         return None, None
>
>     n, frame = g()
>     while frame:
>         print n
>         n, frame = frame.resume()
>
> Let's build something easier to use:
>
>     # Generator.py
>     import sys
>
>     class Generator:
>         def __init__(self):
>             self.frame = sys._getframe(1)
>             self.frame.suspend(self)
>
>         def suspend(self, value):
>             self.frame.suspend(value)
>
>         def end(self):
>             raise IndexError
>
>         def __getitem__(self, i):
>             # fake indices suck, need iterators
>             return self.frame.resume()
>
> Now let's try Guido's pi example:
>
>     # Prints out the first 100 digits of pi
>     from Generator import Generator
>
>     def pi():
>         g = Generator()
>         k, a, b, a1, b1 = 2L, 4L, 1L, 12L, 4L
>         while 1:
>             # Next approximation
>             p, q, k = k*k, 2L*k+1L, k+1L
>             a, b, a1, b1 = a1, b1, p*a+q*a1, p*b+q*b1
>             # Print common digits
>             d, d1 = a/b, a1/b1
>             while d == d1:
>                 g.suspend(int(d))
>                 a, a1 = 10L*(a%b), 10L*(a1%b1)
>                 d, d1 = a/b, a1/b1
>
>     def test():
>         pi_digits = pi()
>         for i in range(100):
>             print pi_digits[i],
>
>     if __name__ == "__main__":
>         test()
>
> Some tree traversals:
>
>     from types import TupleType
>     from Generator import Generator
>
>     # (A - B) + C * (E/F)
>     expr = ("+",
>              ("-", "A", "B"),
>              ("*", "C",
>                   ("/", "E", "F")))
>
>     def postorder(node):
>         g = Generator()
>         if isinstance(node, TupleType):
>             value, left, right = node
>             for child in postorder(left):
>                 g.suspend(child)
>             for child in postorder(right):
>                 g.suspend(child)
>             g.suspend(value)
>         else:
>             g.suspend(node)
>         g.end()
>
>     print "postorder:",
>     for node in postorder(expr):
>         print node,
>     print
>
> This prints:
>
>     postorder: A B - C E F / * +
>
> Cheers,
>
>   Neil
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
>
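For comparison, the quoted postorder example maps almost line-for-line
onto the yield-based generators Python later standardized (PEP 255,
Python 2.2); this sketch uses modern Python 3 syntax rather than the
2001-era code above:

```python
# Postorder traversal with a yield-based generator: every suspension
# point is an explicit "yield", i.e. known at compile time -- the
# restricted form of generator that Jython could support.
def postorder(node):
    if isinstance(node, tuple):
        value, left, right = node
        for child in postorder(left):
            yield child
        for child in postorder(right):
            yield child
        yield value
    else:
        yield node

# (A - B) + C * (E/F), the same tree as in Neil's message
expr = ("+", ("-", "A", "B"), ("*", "C", ("/", "E", "F")))
print("postorder:", " ".join(postorder(expr)))  # postorder: A B - C E F / * +
```

Unlike the frame-suspension patch, no runtime bookkeeping of resumable
frames is needed; the compiler sees each suspension point directly.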





From fdrake at acm.org  Sun Mar 18 15:23:23 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Sun, 18 Mar 2001 09:23:23 -0500 (EST)
Subject: [Python-Dev] Re: Followup on freezetools error
In-Reply-To: <200103180706.f2I768q17436@snark.thyrsus.com>
References: <200103180706.f2I768q17436@snark.thyrsus.com>
Message-ID: <15028.50395.414064.239096@cj42289-a.reston1.va.home.com>

Eric S. Raymond writes:
 > Cancel previous complaint.  Pilot error.  I think I'm going to end up
 > writing some documentation for this puppy...

Eric,
  So how often would you like reminders?  ;-)
  I think a "howto" format document would be great; I'm sure we could
find a place for it in the standard documentation.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Sun Mar 18 16:01:50 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 10:01:50 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: Your message of "Sun, 18 Mar 2001 00:26:45 +0100."
             <20010318002645.H29286@xs4all.nl> 
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>  
            <20010318002645.H29286@xs4all.nl> 
Message-ID: <200103181501.KAA22545@cj20424-a.reston1.va.home.com>

> What Paul means is that he's added a new file to his tree, and wants to send
> in a patch that includes that file. Unfortunately, CVS can't do that :P You
> have two choices:
> 
> - 'cvs add' the file, but don't commit. This is kinda lame since it requires
>  commit access, and it creates the administrivia for the file already. I
>  *think* that if you do this, only you can actually add the file (after the
>  patch is accepted ;) but I'm not sure. After the cvs add, a cvs diff -c will
>  show the file (as all +'es, obviously) even though it will complain to
>  stderr about its ignorance about that specific file.

No, cvs diff still won't diff the file -- it says "new file".

> - Don't use cvs diff. Use real diff instead. Something like this:

Too much work to create a new tree.

What I do: I usually *know* what are the new files.  (If you don't,
consider getting a little more organized first :-).  Then do a regular
diff -c between /dev/null and each of the new files, and append that
to the CVS-generated diff.  Patch understands diffs between /dev/null
and a regular file and understands that this means to add the file.
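That recipe can be sketched in a few lines of shell (the module name and
patch file name here are hypothetical):

```shell
# patch(1) treats a context diff whose "old" side is /dev/null as an
# instruction to create the file, so appending such diffs to a
# CVS-generated diff adds the new files to the patch.
mkdir -p demo
printf 'print "hello"\n' > demo/newmodule.py                 # a hypothetical new file
diff -c /dev/null demo/newmodule.py >> mypatch.diff || true  # diff exits 1 when files differ
grep '^+' mypatch.diff   # the new file's contents show up as added lines
```

In a real tree you would run the `diff -c /dev/null` step once per new
file, after generating the ordinary `cvs diff -c` output.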

(I have no idea what the rest of this thread is about.  Dinkytoy
attitude???  I played with toy cars called dinky toys, but I don't see
the connection.  What SF FAQ are we talking about anyway?)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From barry at digicool.com  Sun Mar 18 17:22:38 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sun, 18 Mar 2001 11:22:38 -0500
Subject: [Python-Dev] Sourceforge FAQ
References: <3AB2D25B.FA724414@prescod.net>
	<LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
	<20010318002645.H29286@xs4all.nl>
Message-ID: <15028.57550.447075.226874@anthem.wooz.org>

>>>>> "TP" == Tim Peters <tim.one at home.com> writes:

    TP> I'm always amused that Unix users never allow the limitations
    TP> of their tools to convince them to do something obvious
    TP> instead.

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> - Don't use cvs diff. Use real diff instead. Something like
    TW> this:

    TW>   mv your tree aside, (can just mv your 'src' dir to
    TW> 'src.mypatch' or such) cvs update -d, make distclean in your
    TW> old tree, diff -crN --exclude=CVS src src.mypatch >
    TW> mypatch.diff

Why not try the "obvious" thing <wink>?

    % cvs diff -uN <rev-switches>

(Okay this also generates unified diffs, but I'm starting to find them
more readable than context diffs anyway.)

I seem to recall actually getting this to work effortlessly when I
generated the Mailman 2.0.3 patch (which contained the new file
README.secure_linux).

Yup, looking at the uploaded SF patch

    http://ftp1.sourceforge.net/mailman/mailman-2.0.2-2.0.3.diff

that file's in there, and diffed against /dev/null, so it's added by
`+' the whole file.

-Barry



From thomas at xs4all.net  Sun Mar 18 17:49:25 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 17:49:25 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <200103181501.KAA22545@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sun, Mar 18, 2001 at 10:01:50AM -0500
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com> <20010318002645.H29286@xs4all.nl> <200103181501.KAA22545@cj20424-a.reston1.va.home.com>
Message-ID: <20010318174924.N27808@xs4all.nl>

On Sun, Mar 18, 2001 at 10:01:50AM -0500, Guido van Rossum wrote:
> > What Paul means is that he's added a new file to his tree, and wants to send
> > in a patch that includes that file. Unfortunately, CVS can't do that :P You
> > have two choices:
> > 
> > - 'cvs add' the file, but don't commit. This is kinda lame since it requires
> >  commit access, and it creates the administrivia for the file already. I
> >  *think* that if you do this, only you can actually add the file (after the
> >  patch is accepted ;) but I'm not sure. After the cvs add, a cvs diff -c will
> >  show the file (as all +'es, obviously) even though it will complain to
> >  stderr about its ignorance about that specific file.

> No, cvs diff still won't diff the file -- it says "new file".

Hm, you're right. I'm sure I had it working, but it doesn't work now. Odd. I
guess Barry got hit by the same oddity (see other reply to my msg ;)

> (I have no idea what the rest of this thread is about.  Dinkytoy
> > attitude???  I played with toy cars called dinky toys, but I don't see
> the connection.  What SF FAQ are we talking about anyway?)

The thread started by Paul asking why his question wasn't in the FAQ :) As
for 'dinkytoy attitude': it's a great, wonderful toy, but you can't use it
for real. A bit harsh, I guess, but I've been hitting the CVS constraints
many times in the last two weeks. (Moving files, moving directories,
removing directories 'for real', moving between different repositories in
which some files/directories (or just their names) overlap, making diffs
with new files in them ;) etc.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Sun Mar 18 17:53:25 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 11:53:25 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Sat, 17 Mar 2001 22:31:39 PST."
             <200103180631.BAA03321@panix3.panix.com> 
References: <200103180631.BAA03321@panix3.panix.com> 
Message-ID: <200103181653.LAA22789@cj20424-a.reston1.va.home.com>

> >> Remember that all bugfixes are available as patches off of SourceForge.
> > 
> > I'm sorry, Aahz, but that is just plain not true. It's not a little bit not
> > true, it's very not true. A lot of the patches applied are either never
> > submitted to SF (because it's the 'obvious fix' by one of the committers) or
> > are modified to some extent from the SF patch proposed. (Often
> > formatting/code style, fairly frequently symbol renaming, and not too
> > infrequently changes in the logic for various reasons.)
> 
> I'm thinking one of us is confused.  CVS is hosted at SourceForge,
> right?  People can download specific parts of Python from SF?  And we're
> presuming there will be a specific fork that patches are checked in to?
> So in what way is my statement not true?

Ah...  Thomas clearly thought you meant the patch manager, and you
didn't make it too clear that's not what you meant.  Yes, they are of
course all available as diffs -- and notice how I use this fact in the
2.0 patches lists in the 2.0 wiki, e.g. on
http://www.python.org/cgi-bin/moinmoin/CriticalPatches.

> >>> ... that would make 10+ 'releases' of various form in those 6 months.
> >>> Ain't no-one[*] going to check them out for a decent spin, they'll just
> >>> wait for the final version.
> >> 
> >> That's why I'm making the beta cycle artificially long (I'd even vote
> >> for a two-month minimum).  It slows the release pace and -- given the
> >> usually high quality of Python betas -- it encourages people to try them
> >> out.  I believe that we *do* need patch betas for system testing.
> > 
> > But having a patch release once every 6 months negates the whole
> > purpose of patch releases :) If you are in need of a bugfix, you
> > don't want to wait three months before a bugfix release beta with
> > your specific bug fixed is going to be released, and you don't want
> > to wait two months more for the release to become final. (Note: we're
> > talking people who don't want to use the next feature release beta or
> > current CVS version, so they aren't likely to try a bugfix release
> > beta either.) Bugfix releases should come often-ish, compared to
> > feature releases. But maybe we can get the BDFL to slow the pace of
> > feature releases instead ? Is the 6-month speedway really appropriate
> > if we have a separate bugfix release track ?
> 
> Well, given that neither of us is arguing on the basis of actual
> experience with Python patch releases, there's no way we can prove one
> point of view as being better than the other.  Tell you what, though:
> take the job of Patch Czar, and I'll follow your lead.  I'll just
> reserve the right to say "I told you so".  ;-)

It seems I need to butt in here.  :-)

I like the model used by Tcl.  They have releases with a 6-12 month
release cycle, 8.0, 8.1, 8.2, 8.3, 8.4.  These have serious alpha and
beta cycles (three of each typically).  Once a release is out, they
issue occasional patch releases, e.g. 8.2.1, 8.2.2, 8.2.3; these are
about a month apart.  The latter bugfixes overlap with the early alpha
releases of the next major release.  I see no sign of beta cycles for
the patch releases.  The patch releases are *very* conservative in
what they add -- just bugfixes, about 5-15 per bugfix release.  They
seem to add the bugfixes to the patch branch as soon as they get them,
and they issue patch releases as soon as they can.

I like this model a lot.  Aahz, if you want to, you can consider this
a BDFL proclamation -- can you add this to your PEP?

> >>> I'm also for starting the maintenance branch right after the
> >>> real release, and start adding bugfixes to it right away, as
> >>> soon as they show up. Keeping up to date on bugfixes to the head
> >>> branch is then as 'simple' as watching python-checkins. (Up
> >>> until the fact a whole subsystem gets rewritten, that is :)
> >>> People should still be able to submit bugfixes for the
> >>> maintenance branch specifically.
> > 
> >> That is *precisely* why my original proposal suggested that only
> >> the N-1 release get patch attention, to conserve effort.  It is
> >> also why I suggested that patch releases get hooked to feature
> >> releases.
> > 
> > There is no technical reason to do just N-1. You can branch off as
> > often as you want (in fact, branches never disappear, so if we
> > were building 3.5 ten years from now (and we would still be using
> > CVS <wink GregS>) we could apply a specific patch to the 2.0
> > maintenance branch and release 2.0.128, if need be.)
> 
> No technical reason, no.  It's just that N-1 is going to be similar
> enough to N, particularly for any given bugfix, that it should be
> "trivial" to keep the bugfixes in synch.  That's all.

I agree.  The Tcl folks never issue patch releases when they've issued
a new major release (in fact the patch releases seem to stop long
before they're ready to issue the next major release).  I realize that
we're way behind with 2.0.1 -- since this is the first time we're
doing this, that's OK for now, but in the future I like the Tcl
approach a lot!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas at xs4all.net  Sun Mar 18 18:03:10 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 18:03:10 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <15028.57550.447075.226874@anthem.wooz.org>; from barry@digicool.com on Sun, Mar 18, 2001 at 11:22:38AM -0500
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com> <20010318002645.H29286@xs4all.nl> <15028.57550.447075.226874@anthem.wooz.org>
Message-ID: <20010318180309.P27808@xs4all.nl>

On Sun, Mar 18, 2001 at 11:22:38AM -0500, Barry A. Warsaw wrote:

> Why not try the "obvious" thing <wink>?

>     % cvs diff -uN <rev-switches>

That certainly doesn't work. 'cvs' just gives a '? Filename' line for that
file, then. I just figured out why the 'cvs add <file>; cvs diff -cN' trick
worked before: it works with CVS 1.11 (which is what's in Debian unstable),
but not with CVS 1.10.8 (which is what's in RH7.) But you really have to use
'cvs add' before doing the diff. (So I'll take back *some* of the dinkytoy
comment ;)

> I seem to recall actually getting this to work effortlessly when I
> generated the Mailman 2.0.3 patch (which contained the new file
> README.secure_linux).

Ah, but you had already added and committed that file. Paul wants to do it to
submit a patch to SF, so checking it in to do that is probably not what he
meant. ;-P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Sun Mar 18 18:07:18 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 18:07:18 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103181653.LAA22789@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sun, Mar 18, 2001 at 11:53:25AM -0500
References: <200103180631.BAA03321@panix3.panix.com> <200103181653.LAA22789@cj20424-a.reston1.va.home.com>
Message-ID: <20010318180718.Q27808@xs4all.nl>

On Sun, Mar 18, 2001 at 11:53:25AM -0500, Guido van Rossum wrote:

> I like the Tcl approach a lot!

Me, too. I didn't know they did it like that, but it makes sense to me :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From barry at digicool.com  Sun Mar 18 18:18:31 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sun, 18 Mar 2001 12:18:31 -0500
Subject: [Python-Dev] Sourceforge FAQ
References: <3AB2D25B.FA724414@prescod.net>
	<LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
	<20010318002645.H29286@xs4all.nl>
	<200103181501.KAA22545@cj20424-a.reston1.va.home.com>
	<20010318174924.N27808@xs4all.nl>
Message-ID: <15028.60903.326987.679071@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> The thread started by Paul asking why his question wasn't in
    TW> the FAQ :) As for 'dinkytoy attitude': it's a great, wonderful
    TW> toy, but you can't use it for real. A bit harsh, I guess, but
    TW> I've been hitting the CVS constraints many times in the last
    TW> two weeks. (Moving files, moving directories, removing
    TW> directories 'for real', moving between different repositories
    TW> in which some files/directories (or just their names) overlap,
    TW> making diffs with new files in them ;) etc.)

Was it Greg Wilson who said at IPC8 that CVS was the worst tool that
everybody uses (or something like that)?

-Barry



From guido at digicool.com  Sun Mar 18 18:21:03 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 12:21:03 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: Your message of "Sun, 18 Mar 2001 17:49:25 +0100."
             <20010318174924.N27808@xs4all.nl> 
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com> <20010318002645.H29286@xs4all.nl> <200103181501.KAA22545@cj20424-a.reston1.va.home.com>  
            <20010318174924.N27808@xs4all.nl> 
Message-ID: <200103181721.MAA23196@cj20424-a.reston1.va.home.com>

> > No, cvs diff still won't diff the file -- it says "new file".
> 
> Hm, you're right. I'm sure I had it working, but it doesn't work now. Odd. I
> guess Barry got hit by the same oddity (see other reply to my msg ;)

Barry posted the right solution: cvs diff -c -N.  The -N option treats
absent files as empty.  I'll use this in the future!
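Plain GNU diff has the same -N behaviour, which is easy to check outside
CVS (directory and file names below are made up):

```shell
# With -N, a file present on only one side is compared against an empty
# file instead of being skipped, so new files appear in the diff as
# all-added lines.
mkdir -p old new
printf 'spam\n' > new/added.py          # exists only in the new tree
diff -crN old new > tree.diff || true   # diff exits 1 when the trees differ
grep 'added.py' tree.diff
```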

> > (I have no idea what the rest of this thread is about.  Dinkytoy
> > attitude???  I played with toy cars called dinky toys, but I don't see
> > the connection.  What SF FAQ are we talking about anyway?)
> 
> The thread started by Paul asking why his question wasn't in the FAQ :) As
> for 'dinkytoy attitude': it's a great, wonderful toy, but you can't use it
> for real. A bit harsh, I guess, but I've been hitting the CVS constraints
> many times in the last two weeks. (Moving files, moving directories,
> removing directories 'for real', moving between different repositories in
> which some files/directories (or just their names) overlap, making diffs
> with new files in them ;) etc.)

Note that at least *some* of the constraints have to do with issues
inherent in version control.  And cvs diff -N works. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Sun Mar 18 18:23:35 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 12:23:35 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Sun, 18 Mar 2001 18:07:18 +0100."
             <20010318180718.Q27808@xs4all.nl> 
References: <200103180631.BAA03321@panix3.panix.com> <200103181653.LAA22789@cj20424-a.reston1.va.home.com>  
            <20010318180718.Q27808@xs4all.nl> 
Message-ID: <200103181723.MAA23240@cj20424-a.reston1.va.home.com>

[me]
> > I like the Tcl approach a lot!

[Thomas]
> Me, too. I didn't know they did it like that, but it makes sense to me :)

Ok, you are hereby nominated to be the 2.0.1 patch Czar.

(You saw that coming, right? :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From barry at digicool.com  Sun Mar 18 18:28:44 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sun, 18 Mar 2001 12:28:44 -0500
Subject: [Python-Dev] Sourceforge FAQ
References: <3AB2D25B.FA724414@prescod.net>
	<LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
	<20010318002645.H29286@xs4all.nl>
	<15028.57550.447075.226874@anthem.wooz.org>
	<20010318180309.P27808@xs4all.nl>
Message-ID: <15028.61516.717449.55864@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    >> I seem to recall actually getting this to work effortlessly
    >> when I generated the Mailman 2.0.3 patch (which contained the
    >> new file README.secure_linux).

    TW> Ah, but you had already added and committed that file. Paul
    TW> wants to do it to submit a patch to SF, so checking it in to
    TW> do that is probably not what he meant. ;-P

Ah, you're right.  I'd missed Paul's original message.  Who am I to
argue that CVS doesn't suck? :)

-Barry



From paulp at ActiveState.com  Sun Mar 18 19:01:43 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 18 Mar 2001 10:01:43 -0800
Subject: [Python-Dev] Sourceforge FAQ
References: <LNBBLJKPBEHFEDALKOLCMEFOJGAA.tim.one@home.com>
Message-ID: <3AB4F807.4EAAD9FF@ActiveState.com>

Tim Peters wrote:
> 

> No:  as my signoff line implied, switch to Windows and tell Tim to deal with
> it.  Works for everyone except me <wink>!  I was just tweaking you.  For a
> patch on SF, it should be enough to just attach the new files and leave a
> comment saying where they belong.

Well, I'm going to bite just one more time. As near as I could see, a
patch on SF allows the submission of a single file. What I did to get
around this (seemed obvious at the time) was put the contents of the
file (because it was small) in the comment field and attach the "rest of
the patch."

Then I wanted to update the file, but comments are appended, not
replaced, so the changes were quickly going to become nasty.

I'm just glad that the answer was sufficiently subtle that it generated
a new thread. I didn't miss anything obvious. :)
-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From martin at loewis.home.cs.tu-berlin.de  Sun Mar 18 19:39:48 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 18 Mar 2001 19:39:48 +0100
Subject: [Python-Dev] Sourceforge FAQ
Message-ID: <200103181839.f2IIdm101115@mira.informatik.hu-berlin.de>

> As near as I could see, a patch on SF allows the submission of a single
> file.

That was true with the old patch manager; the new tool can have
multiple artefacts per report. So I guess the proper procedure now is
to attach new files separately (or to build an archive of the new
files and to attach that separately). That requires no funny diffs
against /dev/null and works on VMS, ummh, Windows also.

Regards,
Martin



From aahz at panix.com  Sun Mar 18 20:42:30 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sun, 18 Mar 2001 11:42:30 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <no.id> from "Guido van Rossum" at Mar 18, 2001 11:53:25 AM
Message-ID: <200103181942.OAA08158@panix3.panix.com>

Guido:
>Aahz:
>>
>>    [to Thomas Wouters]
>>
>> I'm thinking one of us is confused.  CVS is hosted at SourceForge,
>> right?  People can download specific parts of Python from SF?  And we're
>> presuming there will be a specific fork that patches are checked in to?
>> So in what way is my statement not true?
> 
> Ah...  Thomas clearly thought you meant the patch manager, and you
> didn't make it too clear that's not what you meant.  Yes, they are of
> course all available as diffs -- and notice how I use this fact in the
> 2.0 patches lists in the 2.0 wiki, e.g. on
> http://www.python.org/cgi-bin/moinmoin/CriticalPatches.

Of course I didn't make it clear, because I have no clue what I'm
talking about.  ;-)  And actually, I was talking about simply
downloading complete replacements for specific Python source files.

But that seems to be irrelevant to our current path, so I'll shut up now.

>> Well, given that neither of us is arguing on the basis of actual
>> experience with Python patch releases, there's no way we can prove one
>> point of view as being better than the other.  Tell you what, though:
>> take the job of Patch Czar, and I'll follow your lead.  I'll just
>> reserve the right to say "I told you so".  ;-)
> 
> It seems I need to butt in here.  :-)
> 
> I like the model used by Tcl.  They have releases with a 6-12 month
> release cycle, 8.0, 8.1, 8.2, 8.3, 8.4.  These have serious alpha and
> beta cycles (three of each typically).  Once a release is out, they
> issue occasional patch releases, e.g. 8.2.1, 8.2.2, 8.2.3; these are
> about a month apart.  The latter bugfixes overlap with the early alpha
> releases of the next major release.  I see no sign of beta cycles for
> the patch releases.  The patch releases are *very* conservative in
> what they add -- just bugfixes, about 5-15 per bugfix release.  They
> seem to add the bugfixes to the patch branch as soon as they get them,
> and they issue patch releases as soon as they can.
> 
> I like this model a lot.  Aahz, if you want to, you can consider this
> a BDFL proclamation -- can you add this to your PEP?

BDFL proclamation received.  It'll take me a little while to rewrite
this into an internally consistent PEP.  It would be helpful if you
pre-announced (to c.l.py.announce) the official change in feature release
policy (the 6-12 month target instead of a 6 month target).

>>Thomas Wouters:
>>> There is no technical reason to do just N-1. You can branch off as
>>> often as you want (in fact, branches never disappear, so if we
>>> were building 3.5 ten years from now (and we would still be using
>>> CVS <wink GregS>) we could apply a specific patch to the 2.0
>>> maintenance branch and release 2.0.128, if need be.)
>> 
>> No technical reason, no.  It's just that N-1 is going to be similar
>> enough to N, particularly for any given bugfix, that it should be
>> "trivial" to keep the bugfixes in synch.  That's all.
> 
> I agree.  The Tcl folks never issue patch releases when they've issued
> a new major release (in fact the patch releases seem to stop long
> before they're ready to issue the next major release).  I realize that
> we're way behind with 2.0.1 -- since this is the first time we're
> doing this, that's OK for now, but in the future I like the Tcl
> approach a lot!

Okie-doke.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From tim_one at email.msn.com  Sun Mar 18 20:49:17 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 18 Mar 2001 14:49:17 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <3AB4F807.4EAAD9FF@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHOJGAA.tim_one@email.msn.com>

[Paul Prescod]
> Well, I'm going to bite just one more time. As near as I could see, a
> patch only allows the submission of a single file.

That *used* to be true.  Tons of stuff changed on SF recently, including the
ability to attach as many files to patches as you need.  Also to bug reports,
which previously didn't allow any file attachments.  These are all instances
of a Tracker now.  "Feature Requests" is a new Tracker.




From guido at digicool.com  Sun Mar 18 20:58:19 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 14:58:19 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Sun, 18 Mar 2001 11:42:30 PST."
             <200103181942.OAA08158@panix3.panix.com> 
References: <200103181942.OAA08158@panix3.panix.com> 
Message-ID: <200103181958.OAA23418@cj20424-a.reston1.va.home.com>

> > I like this model a lot.  Aahz, if you want to, you can consider this
> > a BDFL proclamation -- can you add this to your PEP?
> 
> BDFL proclamation received.  It'll take me a little while to rewrite
> this into an internally consistent PEP.  It would be helpful if you
> pre-announced (to c.l.py.announce) the official change in feature release
> policy (the 6-12 month target instead of a 6 month target).

You're reading too much in it. :-)

I don't want to commit to a precise release interval anyway -- no two
releases are the same.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From aahz at panix.com  Sun Mar 18 21:12:57 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sun, 18 Mar 2001 12:12:57 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <no.id> from "Guido van Rossum" at Mar 18, 2001 02:58:19 PM
Message-ID: <200103182012.PAA04074@panix2.panix.com>

>> BDFL proclamation received.  It'll take me a little while to rewrite
>> this into an internally consistent PEP.  It would be helpful if you
>> pre-announced (to c.l.py.announce) the official change in feature release
>> policy (the 6-12 month target instead of a 6 month target).
> 
> You're reading too much in it. :-)

Mmmmm....  Probably.

> I don't want to commit to a precise release interval anyway -- no two
> releases are the same.

That's very good to hear.  Perhaps I'm alone in this perception, but it
has sounded to me as though there's a goal (if not a "precise" interval)
of a release every six months.  Here's a quote from you on c.l.py:

"Given our current pace of releases that should be about 6 months warning."

With your current posting frequency to c.l.py, such oracular statements
have some of the force of a Proclamation.  ;-)
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From paulp at ActiveState.com  Sun Mar 18 22:12:45 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 18 Mar 2001 13:12:45 -0800
Subject: [Python-Dev] Sourceforge FAQ
References: <200103181839.f2IIdm101115@mira.informatik.hu-berlin.de>
Message-ID: <3AB524CD.67A0DEEA@ActiveState.com>

"Martin v. Loewis" wrote:
> 
> > As near as I could see, a patch only allows the submission of a single
> > file.
> 
> That was true with the old patch manager; the new tool can have
> multiple artefacts per report. 

The user interface really does not indicate that multiple files may be
attached. Do I just keep going back into the patch page, adding files?

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From guido at python.org  Sun Mar 18 23:43:27 2001
From: guido at python.org (Guido van Rossum)
Date: Sun, 18 Mar 2001 17:43:27 -0500
Subject: [Python-Dev] Re: What has become of PEP224 ?
Message-ID: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>

[On c.l.py]
"Aahz Maruch" <aahz at panix.com> wrote in message
news:992tb4$qf5$1 at panix2.panix.com...
> [cc'd to Barry Warsaw in case he wants to comment]

(I happen to be skimming c.l.py this lazy Sunday afternoon :-)

> In article <3ab4f320 at nntp.server.uni-frankfurt.de>,
> Michael 'Mickey' Lauer  <mickey at Vanille.de> wrote:
> >
> >Hi. If I remember correctly PEP224 (the famous "attribute docstrings")
> >has only been postponed because Python 2.0 was in feature freeze
> >in August 2000. Will it be in 2.1 ? If not, what's the reason ? What
> >is needed for it to be included in 2.1 ?
>
> I believe it has been essentially superseded by PEP 232; I thought
> function attributes were going to be in 2.1, but I don't see any clear
> indication.

Actually, the attribute docstrings PEP is about a syntax for giving
non-function objects a docstring.  That's quite different than the function
attributes PEP.

The attribute docstring PEP didn't get in (and is unlikely to get in in its
current form) because I don't like the syntax much, *and* because the way to
look up the docstrings is weird and ugly: you'd have to use something like
instance.spam__doc__ or instance.__doc__spam (I forget which; they're both
weird and ugly).

I also expect that the doc-sig will be using the same syntax (string
literals in non-docstring positions) for a different purpose.  So I see
little chance for PEP 224.  Maybe I should just pronounce on this, and
declare the PEP rejected.

Unless Ping thinks this would be a really cool feature to be added to pydoc?
(Ping's going to change pydoc from importing the target module to scanning
its source, I believe -- then he could add this feature without changing the
Python parser. :-)

--Guido van Rossum







From tim_one at email.msn.com  Sun Mar 18 23:48:38 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 18 Mar 2001 17:48:38 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: <3AB277C7.28FE9B9B@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEIEJGAA.tim_one@email.msn.com>

[M.-A. Lemburg]
> Looking around some more on the web, I found that the GNU MP (GMP)
> lib has switched from being GPLed to LGPLed,

Right.

> meaning that it can actually be used by non-GPLed code as long as
> the source code for the GMP remains publicly accessible.

Ask Stallman <0.9 wink>.

> ...
> Since the GMP offers arbitrary precision numbers and also has
> a rational number implementation I wonder if we could use it
> in Python to support fractions and arbitrary precision
> floating points ?!

Note that Alex Martelli runs the "General Multiprecision Python" project on
SourceForge:

    http://gmpy.sourceforge.net/

He had a severe need for fast rational arithmetic in his Python programs, so
started wrapping the full GMP out of necessity.  I'm sorry to say that I
haven't had time to even download his code.

WRT floating point, GMP supports arbitrary-precision floats too, but not in a
way you're going to like:  they're binary floats, and do not deliver
platform-independent results.  That last point is subtle, because the docs
say:

    The precision of a calculation is defined as follows:  Compute the
    requested operation exactly (with "infinite precision"), and truncate
    the result to the destination variable precision.

Leaving aside that truncation is a bad idea, that *sounds*
platform-independent.  The trap is that GMP supports no way to specify the
precision of a float result exactly:  you can ask for any precision you like,
but the implementation reserves the right to *use* any precision whatsoever
that's at least as large as what you asked for.  And, in practice, they do
use more than you asked for, depending on the word size of the machine.  This
is in line with GMP's overriding goal of being fast, rather than consistent
or elegant.

GMP's int and rational facilities could be used to build platform-independent
decimal fp, though.  However, this doesn't get away from the string<->float
issues I covered before:  if you're going to use binary ints internally (and
GMP does), decimal_string<->decimal_float is quadratic time in both
directions.
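To make the quadratic claim concrete in miniature (a Python sketch of the
schoolbook conversion, not GMP code): with the number stored in binary, each
decimal digit of output comes from a division by 10 over the full width of
the number, so n output digits cost on the order of n full-width passes.

```python
# Schoolbook binary->decimal conversion: one full-width divmod per
# output digit, hence O(n) passes over an O(n)-digit number.
def to_decimal_digits(n):
    digits = []
    while n:
        n, d = divmod(n, 10)   # full-width division each iteration
        digits.append(str(d))
    return "".join(reversed(digits)) or "0"

n = 10**100 + 7                # a ~333-bit integer, 101 decimal digits
assert to_decimal_digits(n) == str(n)
assert len(str(n)) == 101
```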

Note too that GMP is a lot of code, and difficult to port due to its "speed
first" goals.  Making Python *rely* on it is thus dubious (GMP on a Palm
Pilot?  maybe ...).

> Here's pointer to what the GNU MP has to offer:
>
>   http://www.math.columbia.edu/online/gmp.html

The official home page (according to Torbjörn Granlund, GMP's dad) is

    http://www.swox.com/gmp/

> The existing mpz module only supports MP integers, but support
> for the other two types should only be a matter of hard work ;-).

Which Alex already did.  Now what <wink>?




From aleaxit at yahoo.com  Mon Mar 19 00:26:23 2001
From: aleaxit at yahoo.com (Alex Martelli)
Date: Mon, 19 Mar 2001 00:26:23 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCAEIEJGAA.tim_one@email.msn.com>
Message-ID: <08e801c0b003$824f4f00$0300a8c0@arthur>

"Tim Peters" <tim_one at email.msn.com> writes:

> Note that Alex Martelli runs the "General Multiprecision Python" project on
> SourceForge:
>
>     http://gmpy.sourceforge.net/
>
> He had a severe need for fast rational arithmetic in his Python programs, so
> started wrapping the full GMP out of necessity.  I'm sorry to say that I
> haven't had time to even download his code.

...and as for me, I haven't gotten around to prettying it up for beta
release yet (mostly the docs -- still just a plain textfile) as it's doing
what I need... but, I _will_ get a round tuit...


> WRT floating point, GMP supports arbitrary-precision floats too, but not in a
> way you're going to like:  they're binary floats, and do not deliver
> platform-independent results.  That last point is subtle, because the docs
> say:
>
>     The precision of a calculation is defined as follows:  Compute the
>     requested operation exactly (with "infinite precision"), and truncate
>     the result to the destination variable precision.
>
> Leaving aside that truncation is a bad idea, that *sounds*
> platform-independent.  The trap is that GMP supports no way to specify the
> precision of a float result exactly:  you can ask for any precision you like,

There's another free library that interoperates with GMP to remedy
this -- it's called MPFR and lives at http://www.loria.fr/projets/mpfr/.
It's also LGPL.  I haven't looked much into it as it seems it's not been
ported to Windows yet (and that looks like quite a project) which is
the platform I'm currently using (and, rationals do what I need:-).

> > The existing mpz module only supports MP integers, but support
> > for the other two types should only be a matter of hard work ;-).
>
> Which Alex already did.  Now what <wink>?

Hmmm, port MPFR everywhere...?-)  Pearu Peterson already did the
MPFR Python wrapper interoperating with GMPY, btw -- it lives at
http://cens.ioc.ee/~pearu/misc/gmpy-mpfr/ (haven't tested it as
I can't run MPFR myself, as above explained).


Alex



_________________________________________________________
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com




From mal at lemburg.com  Mon Mar 19 01:07:17 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 01:07:17 +0100
Subject: [Python-Dev] Re: What has become of PEP224 (attribute docstrings) ?
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>
Message-ID: <3AB54DB5.52254EB6@lemburg.com>

Guido van Rossum wrote:
> ...
>
> The attribute docstring PEP didn't get in (and is unlikely to get in in its
> current form) because I don't like the syntax much, *and* because the way to
> look up the docstrings is weird and ugly: you'd have to use something like
> instance.spam__doc__ or instance.__doc__spam (I forget which; they're both
> weird and ugly).

It was the only way I could think of for having attribute doc-strings
behave in the same way as e.g. methods do, that is they
should respect the class hierarchy in much the same way. This is
obviously needed if you want to document not only the method interface
of a class, but also its attributes which could be accessible from
the outside.

I am not sure whether parsing the module would enable the same
sort of functionality unless Ping's code does its own interpretation
of imports and base class lookups.

Note that the attribute doc string attribute names are really
secondary to the PEP. The main point is using the same syntax
for attribute doc-strings as we already use for classes, modules
and functions.

> I also expect that the doc-sig will be using the same syntax (string
> literals in non-docstring positions) for a different purpose. 

I haven't seen any mention of this on the doc-sig. Could you explain
what they intend to use them for ?

> So I see
> little chance for PEP 224.  Maybe I should just pronounce on this, and
> declare the PEP rejected.

Do you have an alternative approach which meets the design goals
of the PEP ?
 
> Unless Ping thinks this would be a really cool feature to be added to pydoc?
> (Ping's going to change pydoc from importing the target module to scanning
> its source, I believe -- then he could add this feature without changing the
> Python parser. :-)

While Ping's code is way cool, I think we shouldn't forget that
other code will also want to do its own introspection, possibly
even at run-time which is certainly not possible by (re-)parsing the
source code every time.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From tim.one at home.com  Mon Mar 19 06:26:27 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 19 Mar 2001 00:26:27 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: <08e801c0b003$824f4f00$0300a8c0@arthur>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com>

[Alex Martelli]
> ...
> There's another free library that interoperates with GMP to remedy
> this -- it's called MPFR and lives at http://www.loria.fr/projets/mpfr/.
> It's also LGPL.  I haven't looked much into it as it seems it's not been
> ported to Windows yet (and that looks like quite a project) which is
> the platform I'm currently using (and, rationals do what I need:-).

Thanks for the pointer!  From a quick skim, good news & bad news (although
which is which may depend on POV):

+ The authors apparently believe their MPFR routines "should replace
  the MPF class in further releases of GMP".  Then somebody else will
  port them.

+ Allows exact specification of result precision (which will make the
  results 100% platform-independent, unlike GMP's).

+ Allows choice of IEEE 754 rounding modes (unlike GMP's truncation).

+ But is still binary floating-point.

Marc-Andre is especially interested in decimal fixed- and floating-point, and
even more specifically than that, of a flavor that will be efficient for
working with decimal types in databases (which I suspect-- but don't
know --means that I/O (conversion) costs are more important than computation
speed once converted).  GMP + MPFR don't really address the decimal part of
that.  Then again, MAL hasn't quantified any of his desires either <wink>; I
bet he'd be happier with a BCD-ish scheme.
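The value of specifying result precision exactly can be illustrated with
Python's decimal module (an anachronism here -- it postdates this thread --
used only to show the idea): once precision and rounding are pinned down
exactly, the same computation yields the same digits on every platform.

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

ctx = getcontext()
ctx.prec = 10                    # exactly 10 significant digits
ctx.rounding = ROUND_HALF_EVEN   # an IEEE-754 rounding mode

# Platform-independent because precision is exact, not a minimum.
q = Decimal(1) / Decimal(7)
assert str(q) == "0.1428571429"
```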

> ...
> Hmmm, port MPFR everywhere...?-)  Pearu Peterson already did the
> MPFR Python wrapper interoperating with GMPY, btw -- it lives at
> http://cens.ioc.ee/~pearu/misc/gmpy-mpfr/ (haven't tested it as
> I can't run MPFR myself, as above explained).

OK, that amounts to ~200 lines of C code to wrap some of the MPFR functions
(exp, log, sqrt, sincos, agm, log2, pi, pow; many remain to be wrapped; and
they don't allow specifying precision yet).  So Pearu still has significant
work to do here, while MAL is wondering who in their right mind would want to
do *anything* with numbers except add them <wink>.

hmm-he's-got-a-good-point-there-ly y'rs  - tim




From dkwolfe at pacbell.net  Mon Mar 19 06:57:53 2001
From: dkwolfe at pacbell.net (Dan Wolfe)
Date: Sun, 18 Mar 2001 21:57:53 -0800
Subject: [Python-Dev] Makefile woos..
Message-ID: <0GAF003LYKEZSF@mta5.snfc21.pbi.net>

While compiling the 2.0b1 release on my shiny new Mac OS X box 
today, I noticed that the fcntl module was breaking, so I went hunting 
for the cause...  (it was better than working on my taxes!)....

To make a long story short... I should have worked on my taxes -- at 
least -- 80% probability -- I understand those...

Ok, the reason that the fcntl module was breaking was that uname now 
reports Darwin 1.3 and it wasn't in the list... in the process of fixing 
that and testing to make sure that it was going to work correctly, I 
discovered that sys.platform was reporting that I was on a darwin1 
platform.... humm where did that come from...

It turns out that the MACHDEP is set correctly to Darwin1.3 when 
configuration queries the system... however, during the process of 
converting makefile.pre.in to makefile it passes thru the following SED 
script that starts around line 6284 of the configuration file:

sed 's/%@/@@/; s/@%/@@/; s/%g\$/@g/; /@g\$/s/[\\\\&%]/\\\\&/g;
  s/@@/%@/; s/@@/@%/; s/@g\$/%g/' > conftest.subs <<\\CEOF

which when applied to the Makefile.pre.in results in

MACHDEP = darwin1 instead of MACHDEP = darwin1.3

Question 1: I'm not geeky enough to understand why the '.3' gets 
removed.... is there a problem with the SED script? or did I overlook 
something?
Question 2: I noticed that all the other versions are 
<OS><MajorRevision> also - is this intentional? or is this just a result 
of the bug in the SED script
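For what it's worth, the observed truncation matches a substitution that
deletes everything from the first dot onward; a hypothetical one-liner (not
the actual configure code) showing the effect:

```shell
# Keep only the text before the first dot: "darwin1.3" -> "darwin1".
echo "darwin1.3" | sed 's/\..*//'
```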

If someone can help me understand what's going on here, I'll be glad to 
submit the patch to fix the fcntl module and a few others on Mac OS X.

- Dan - who probably would have finished off his taxes if he hadn't 
opened this box....



From greg at cosc.canterbury.ac.nz  Mon Mar 19 04:02:55 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 19 Mar 2001 15:02:55 +1200 (NZST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AB1ECEA.CD0FFC51@tismer.com>
Message-ID: <200103190302.PAA06055@s454.cosc.canterbury.ac.nz>

Christian Tismer <tismer at tismer.com>:

> But stopping the interpreter is a perfect unwind, and we
> can start again from anywhere.

Hmmm... Let me see if I have this correct.

You can switch from uthread A to uthread B as long
as the current depth of interpreter nesting is the
same as it was when B was last suspended. It doesn't
matter if the interpreter has returned and then
been called again, as long as it's at the same
level of nesting on the C stack.

Is that right? Is that the only restriction?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From uche.ogbuji at fourthought.com  Mon Mar 19 08:09:46 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Mon, 19 Mar 2001 00:09:46 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: Message from "Tim Peters" <tim.one@home.com> 
   of "Sat, 17 Mar 2001 20:36:40 EST." <LNBBLJKPBEHFEDALKOLCGEGCJGAA.tim.one@home.com> 
Message-ID: <200103190709.AAA10053@localhost.localdomain>

> FYI, I pointed a correspondent to Neil's new generator patch (among other
> things), and got this back.  Not being a Web Guy at heart, I don't have a
> clue about XSLT (just enough to know that 4-letter acronyms are a web
> abomination <wink>).
> 
> Note:  in earlier correspondence, the generator idea didn't seem to "click"
> until I called them "resumable functions" (as I often did in the past, but
> fell out of the habit).  People new to the concept often pick that up
> quicker, or even, as in this case, remember that they once rolled such a
> thing by hand out of prior necessity.
> 
> Anyway, possibly food for thought if XSLT means something to you ...

Quite interesting.  I brought up this *exact* point at the Stackless BOF at 
IPC9.  I mentioned that the immediate reason I was interested in Stackless was 
to supercharge the efficiency of 4XSLT.  I think that a stackless 4XSLT could 
pretty much annihilate the other processors in the field for performance.


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From uche.ogbuji at fourthought.com  Mon Mar 19 08:15:07 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Mon, 19 Mar 2001 00:15:07 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: Message from Paul Prescod <paulp@ActiveState.com> 
   of "Sat, 17 Mar 2001 17:50:39 PST." <3AB4146E.62AE3299@ActiveState.com> 
Message-ID: <200103190715.AAA10076@localhost.localdomain>

> I would call what you need for an efficient XSLT implementation "lazy
> lists." They are never infinite but you would rather not pre-compute
> them in advance. Often you use only the first item. Iterators probably
> would be a good implementation technique.

Well, if you don't want unmanageable code, you could get the same benefit as 
stackless by iterating rather than recursing throughout an XSLT
implementation.  But why not then go farther?  Implement the whole thing in
raw assembler?

What Stackless would give is a way to keep good, readable execution structured 
without sacrificing performance.

XSLT interpreters are complex beasts, and I can't even imagine rewriting 
4XSLT's xsl:call-template dispatch code to be purely iterative.  The result 
would be impenetrable.

But then again, this isn't exactly what you said.  I'm not sure why you think 
lazy lists would make all the difference.  Not so according to my benchmarking.

Aside: XPath node sets are one reason I've been interested in a speed- and 
space-efficient set implementation for Python.  However, Guido and Tim are 
rather convincing that this is a fool's errand.


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From MarkH at ActiveState.com  Mon Mar 19 10:40:24 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Mon, 19 Mar 2001 20:40:24 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
Message-ID: <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com>

I understand the issue of "default Unicode encoding" is a loaded one,
however I believe with the Windows' file system we may be able to use a
default.

Windows provides 2 versions of many functions that accept "strings" - one
that uses "char *" arguments, and another using "wchar *" for Unicode.
Interestingly, the "char *" versions of functions almost always support
"mbcs" encoded strings.

To make Python work nicely with the file system, we really should handle
Unicode characters somehow.  It is not too uncommon to find that the "program
files" or the "user" directory has Unicode characters in non-English
versions of Win2k.

The way I see it, to fix this we have 2 basic choices when a Unicode object
is passed as a filename:
* we call the Unicode versions of the CRTL.
* we auto-encode using the "mbcs" encoding, and still call the non-Unicode
versions of the CRTL.

The first option has a problem in that determining what Unicode support
Windows 95/98 have may be more trouble than it is worth.  Sticking to purely
ascii versions of the functions means that the worst thing that can happen
is we get a regular file-system error if an mbcs encoded string is passed on
a non-Unicode platform.
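The proposed auto-encode step might look something like this (a hypothetical
helper for illustration, not the actual patch; the "mbcs" codec exists only
on Windows, so the sketch falls back to "ascii" elsewhere, mirroring the
"worst case is a regular error" argument above):

```python
def filename_to_bytes(name, encoding="mbcs"):
    """Encode a Unicode filename for the narrow "char *" CRT calls."""
    if isinstance(name, bytes):
        return name                     # already encoded; pass through
    try:
        return name.encode(encoding)    # "mbcs" == the Windows ANSI code page
    except LookupError:
        # Off Windows the "mbcs" codec does not exist; falling back to
        # "ascii" just lets this sketch run anywhere.
        return name.encode("ascii")
```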

Does anyone have any objections to this scheme or see any drawbacks in it?
If not, I'll knock up a patch...

Mark.




From mal at lemburg.com  Mon Mar 19 11:09:49 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 11:09:49 +0100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com>
Message-ID: <3AB5DAED.F7089741@lemburg.com>

Mark Hammond wrote:
> 
> I understand the issue of "default Unicode encoding" is a loaded one,
> however I believe with the Windows' file system we may be able to use a
> default.
> 
> Windows provides 2 versions of many functions that accept "strings" - one
> that uses "char *" arguments, and another using "wchar *" for Unicode.
> Interestingly, the "char *" versions of functions almost always support
> "mbcs" encoded strings.
> 
> To make Python work nicely with the file system, we really should handle
> Unicode characters somehow.  It is not too uncommon to find that the "program
> files" or the "user" directory has Unicode characters in non-English
> versions of Win2k.
> 
> The way I see it, to fix this we have 2 basic choices when a Unicode object
> is passed as a filename:
> * we call the Unicode versions of the CRTL.
> * we auto-encode using the "mbcs" encoding, and still call the non-Unicode
> versions of the CRTL.
> 
> The first option has a problem in that determining what Unicode support
> Windows 95/98 have may be more trouble than it is worth.  Sticking to purely
> ascii versions of the functions means that the worst thing that can happen
> is we get a regular file-system error if an mbcs encoded string is passed on
> a non-Unicode platform.
> 
> Does anyone have any objections to this scheme or see any drawbacks in it?
> If not, I'll knock up a patch...

Hmm... the problem with MBCS is that it is not one encoding,
but can be many things. I don't know if this is an issue (can there
be more than one encoding per process ? is the encoding a user or
system setting ? does the CRT know which encoding to use/assume ?),
but the Unicode approach sure sounds a lot safer.

Also, what would os.listdir() return ? Unicode strings or 8-bit
strings ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From MarkH at ActiveState.com  Mon Mar 19 11:34:46 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Mon, 19 Mar 2001 21:34:46 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <3AB5DAED.F7089741@lemburg.com>
Message-ID: <LCEPIIGDJPKCOIHOBJEPMEDHDGAA.MarkH@ActiveState.com>

> Hmm... the problem with MBCS is that it is not one encoding,
> but can be many things.

Yeah, but I think specifically with filenames this is OK.  We would be
translating from Unicode objects using MBCS in the knowledge that somewhere
in the Win32 maze they will be converted back to Unicode, using MBCS, to
access the Unicode based filesystem.

At the moment, you just get an exception - the dreaded "ASCII encoding
error: ordinal not in range(128)" :)

I don't see the harm - we are making no assumptions about the user's data,
just about the platform.  Note that I never want to assume a string object
is in a particular encoding - just assume that the CRTL file functions can
handle a particular encoding for their "filename" parameter.  I don't want
to handle Unicode objects in any "data" params, just the "filename".

Mark.




From MarkH at ActiveState.com  Mon Mar 19 11:53:01 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Mon, 19 Mar 2001 21:53:01 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <3AB5DAED.F7089741@lemburg.com>
Message-ID: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com>

Sorry, I notice I didn't answer your specific question:

> Also, what would os.listdir() return ? Unicode strings or 8-bit
> strings ?

This would not change.

This is what my testing shows:

* I can switch to a German locale, and create a file using the keystrokes
"`atest`o".  The "`" is the dead-char so I get an umlaut over the first and
last characters.

* os.listdir() returns '\xe0test\xf2' for this file.

* That same string can be passed to "open" etc to open the file.

* The only way to get that string to a Unicode object is to use the
encodings "Latin1" or "mbcs".  Of them, "mbcs" would have to be safer, as at
least it has a hope of handling non-latin characters :)
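That decoding step can be checked directly (a sketch using Latin-1, since the
"mbcs" codec itself is Windows-only):

```python
# The byte string os.listdir() returned for the accented filename.
raw = b"\xe0test\xf2"

# Decoding yields the Unicode filename (a-grave, "test", o-grave);
# on Windows, "mbcs" would be used in place of "latin-1".
name = raw.decode("latin-1")
assert name == "\u00e0test\u00f2"

# Encoding round-trips to the same bytes, which is why the resulting
# string can be passed straight back to open().
assert name.encode("latin-1") == raw
```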

So - assume I am passed a Unicode object that represents this filename.  At
the moment we simply throw that exception if we pass that Unicode object to
open().  I am proposing that "mbcs" be used in this case instead of the
default "ascii"

If nothing else, my idea could be considered a "short-term" solution.  If
ever it is found to be a problem, we can simply move to the unicode APIs,
and nothing would break - just possibly more things _would_ work :)

Mark.




From mal at lemburg.com  Mon Mar 19 12:17:18 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 12:17:18 +0100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com>
Message-ID: <3AB5EABE.CE4C5760@lemburg.com>

Mark Hammond wrote:
> 
> Sorry, I notice I didn't answer your specific question:
> 
> > Also, what would os.listdir() return ? Unicode strings or 8-bit
> > strings ?
> 
> This would not change.
> 
> This is what my testing shows:
> 
> * I can switch to a German locale, and create a file using the keystrokes
> "`atest`o".  The "`" is the dead-char so I get an umlaut over the first and
> last characters.
> 
> * os.listdir() returns '\xe0test\xf2' for this file.
> 
> * That same string can be passed to "open" etc to open the file.
> 
> * The only way to get that string to a Unicode object is to use the
> encodings "Latin1" or "mbcs".  Of them, "mbcs" would have to be safer, as at
> least it has a hope of handling non-latin characters :)
> 
> So - assume I am passed a Unicode object that represents this filename.  At
> the moment we simply throw that exception if we pass that Unicode object to
> open().  I am proposing that "mbcs" be used in this case instead of the
> default "ascii"
> 
> If nothing else, my idea could be considered a "short-term" solution.  If
> ever it is found to be a problem, we can simply move to the unicode APIs,
> and nothing would break - just possibly more things _would_ work :)

Sounds like a good idea. We'd only have to assure that whatever
os.listdir() returns can actually be used to open the file, but that
seems to be the case... at least for Latin-1 chars (I wonder how
well this behaves with Japanese chars).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Mar 19 12:34:30 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 12:34:30 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com>
Message-ID: <3AB5EEC6.F5D6FE3B@lemburg.com>

Tim Peters wrote:
> 
> [Alex Martelli]
> > ...
> > There's another free library that interoperates with GMP to remedy
> > this -- it's called MPFR and lives at http://www.loria.fr/projets/mpfr/.
> > It's also LGPL.  I haven't looked much into it as it seems it's not been
> > ported to Windows yet (and that looks like quite a project) which is
> > the platform I'm currently using (and, rationals do what I need:-).
> 
> Thanks for the pointer!  From a quick skim, good news & bad news (although
> which is which may depend on POV):
> 
> + The authors apparently believe their MPFR routines "should replace
>   the MPF class in further releases of GMP".  Then somebody else will
>   port them.

...or simply install both packages...
 
> + Allows exact specification of result precision (which will make the
>   results 100% platform-independent, unlike GMP's).

This is a Good Thing :)
 
> + Allows choice of IEEE 754 rounding modes (unlike GMP's truncation).
> 
> + But is still binary floating-point.

:-(
 
> Marc-Andre is especially interested in decimal fixed- and floating-point, and
> even more specifically than that, of a flavor that will be efficient for
> working with decimal types in databases (which I suspect-- but don't
> know --means that I/O (conversion) costs are more important than computation
> speed once converted).  GMP + MPFR don't really address the decimal part of
> that.  Then again, MAL hasn't quantified any of his desires either <wink>; I
> bet he'd be happier with a BCD-ish scheme.

The ideal solution for my needs would be an implementation which
allows:

* fast construction of decimals using string input
* fast decimal string output
* good interaction with the existing Python numeric types

BCD-style or simple decimal string style implementations serve
these requirements best, but GMP and MPFR don't address the
decimal part at all.
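For illustration, the `decimal` module that Python later grew (it did not exist in 2001) meets exactly these requirements:

```python
from decimal import Decimal

# Fast construction from string input, exact decimal string output:
d = Decimal("1.10")
print(d + Decimal("2.20"))   # exact decimal arithmetic, no binary rounding

# Interaction with the existing Python numeric types:
print(d + 1)                 # Decimal + int stays a Decimal
print(float(d))              # explicit conversion to a Python float
```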
 
> > ...
> > Hmmm, port MPFR everywhere...?-)  Pearu Peterson already did the
> > MPFR Python wrapper interoperating with GMPY, btw -- it lives at
> > http://cens.ioc.ee/~pearu/misc/gmpy-mpfr/ (haven't tested it as
> > I can't run MPFR myself, as above explained).
> 
> OK, that amounts to ~200 lines of C code to wrap some of the MPFR functions
> (exp, log, sqrt, sincos, agm, log2, pi, pow; many remain to be wrapped; and
> they don't allow specifying precision yet).  So Pearu still has significant
> work to do here, while MAL is wondering who in their right mind would want to
> do *anything* with numbers except add them <wink>.

Right: as long as there is a possibility to convert these decimals to 
Python floats or integers (or longs) I don't really care ;)

Seriously, I think that the GMP lib + MPFR lib provide a very
good basis to do work with numbers on Unix. Unfortunately, they
don't look very portable (given all that assembler code in there
and the very Unix-centric build system).

Perhaps we'd need a higher level interface to all of this which
can then take GMP or some home-grown "port" of the Python long
implementation to base-10 as backend to do the actual work.

It would have to provide these types:
 Integer - arbitrary precision integers
 Rational - ditto for rational numbers
 Float - ditto for floating point numbers

Integration with Python is easy given the new coercion mechanism
at C level. The problem I see is how to define coercion order, i.e.
Integer + Rational should produce a Rational, but what about
Rational + Float or Float + Python float or Integer + Python float ?
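For comparison, the coercion order that Python's later `fractions` module settled on can be demonstrated like this (the module did not exist at the time; this is illustration only):

```python
from fractions import Fraction

# Integer + Rational -> Rational (no precision is lost)
print(1 + Fraction(1, 2))      # a Fraction, 3/2

# Rational + float -> float (coercion moves toward the lossy type)
print(Fraction(1, 2) + 0.25)   # a plain Python float
```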

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Mar 19 12:38:31 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 12:38:31 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCAEIEJGAA.tim_one@email.msn.com>
Message-ID: <3AB5EFB7.2E2AAED0@lemburg.com>

Tim Peters wrote:
> 
> [M.-A. Lemburg]
> > Looking around some more on the web, I found that the GNU MP (GMP)
> > lib has switched from being GPLed to LGPLed,
> 
> Right.
> 
> > meaning that it can actually be used by non-GPLed code as long as
> > the source code for the GMP remains publicly accessible.
> 
> Ask Stallman <0.9 wink>.
> 
> > ...
> > Since the GMP offers arbitrary precision numbers and also has
> > a rational number implementation I wonder if we could use it
> > in Python to support fractions and arbitrary precision
> > floating points ?!
> 
> Note that Alex Martelli runs the "General Multiprecision Python" project on
> SourceForge:
> 
>     http://gmpy.sourceforge.net/
> 
> He had a severe need for fast rational arithmetic in his Python programs, so
> started wrapping the full GMP out of necessity.

I found that link after hacking away at yet another GMP
wrapper for three hours Friday night... turned out to be a nice
proof of concept, but also showed some issues with respect to
coercion (see my other reply).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From gherman at darwin.in-berlin.de  Mon Mar 19 12:57:49 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 12:57:49 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
Message-ID: <3AB5F43D.E33B188D@darwin.in-berlin.de>

I wrote on comp.lang.python today:
> 
> is there a simple way (or any way at all) to find out for 
> any given hard disk how much free space is left on that
> device? I looked into the os module, but either not hard
> enough or there is no such function. Of course, the ideal
> solution would be platform-independent, too... :)

Is there any good reason for not having a cross-platform
solution to this? I'm certainly not the first to ask for
such a function and it certainly exists for all platforms,
doesn't it?

Unfortunately, OS problems like that make it rather impossible
to write truly cross-platform applications in Python, even if
it is touted to be exactly that.

I know that OS differ in the services they provide, but in
this case it seems to me that each one *must* have such a 
function, so I don't understand why it's not there...

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")



From thomas at xs4all.net  Mon Mar 19 13:07:13 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 13:07:13 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5F43D.E33B188D@darwin.in-berlin.de>; from gherman@darwin.in-berlin.de on Mon, Mar 19, 2001 at 12:57:49PM +0100
References: <3AB5F43D.E33B188D@darwin.in-berlin.de>
Message-ID: <20010319130713.M29286@xs4all.nl>

On Mon, Mar 19, 2001 at 12:57:49PM +0100, Dinu Gherman wrote:
> I wrote on comp.lang.python today:
> > is there a simple way (or any way at all) to find out for 
> > any given hard disk how much free space is left on that
> > device? I looked into the os module, but either not hard
> > enough or there is no such function. Of course, the ideal
> > solution would be platform-independent, too... :)

> Is there any good reason for not having a cross-platform
> solution to this? I'm certainly not the first to ask for
> such a function and it certainly exists for all platforms,
> doesn't it?

I think the main reason such a function does not exist is that no-one wrote
it. If you can write a portable function, or fake one by making different
implementations on different platforms, please contribute ;) Step one is
making an inventory of the available functions, though, so you know how
large an intersection you have to work with. The fact that you have to start
that study is probably the #1 reason no-one's done it yet :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nhodgson at bigpond.net.au  Mon Mar 19 13:06:40 2001
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Mon, 19 Mar 2001 23:06:40 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com>
Message-ID: <09c001c0b06d$0f359eb0$8119fea9@neil>

Mark Hammond:

> To make Python work nicely with the file system, we really
> should handle Unicode characters somehow.  It is not too
> uncommon to find the "program files" or the "user" directory
> have Unicode characters in non-english version of Win2k.

   The "program files" and "user" directories should still have names
representable in the user's normal locale, so the user can access them
by passing a narrow Python character string, in their standard
encoding, to the open function.

> The way I see it, to fix this we have 2 basic choices when a Unicode
object
> is passed as a filename:
> * we call the Unicode versions of the CRTL.

   This is by far the better approach IMO as it is more general and will
work for people who switch locales or who want to access files created by
others using other locales. Although you can always use the horrid mangled
"*~1" names.

> * we auto-encode using the "mbcs" encoding, and still call the non-Unicode
> versions of the CRTL.

   This will improve things but to a lesser extent than the above. May be
the best possible on 95.

> The first option has a problem in that determining what Unicode support
> Windows 95/98 have may be more trouble than it is worth.

    None of the *W file calls are listed as supported by 95 although Unicode
file names can certainly be used on FAT partitions.

> * I can switch to a German locale, and create a file using the
> keystrokes "`atest`o".  The "`" is the dead-char so I get an
> umlaut over the first and last characters.

   It's more fun playing with a non-roman locale, and one that doesn't fit in
the normal Windows code page for this sort of problem. Russian is reasonably
readable for us English speakers.

M.-A. Lemburg:
> I don't know if this is an issue (can there
> be more than one encoding per process ?

   There is an input locale and keyboard layout per thread.

> is the encoding a user or system setting ?

   There are system defaults and a menu through which you can change the
locale whenever you want.

> Also, what would os.listdir() return ? Unicode strings or 8-bit
> strings ?

   There is the Windows approach of having an os.listdirW() ;) .

   Neil






From thomas at xs4all.net  Mon Mar 19 13:13:26 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 13:13:26 +0100
Subject: [Python-Dev] Makefile woos..
In-Reply-To: <0GAF003LYKEZSF@mta5.snfc21.pbi.net>; from dkwolfe@pacbell.net on Sun, Mar 18, 2001 at 09:57:53PM -0800
References: <0GAF003LYKEZSF@mta5.snfc21.pbi.net>
Message-ID: <20010319131325.N29286@xs4all.nl>

On Sun, Mar 18, 2001 at 09:57:53PM -0800, Dan Wolfe wrote:

> Question 1: I'm not geeky enough to understand why the '.3' gets
> removed.... is there a problem with the SED script? or did I overlook 
> something?
> Question 2: I noticed that all the other versions are 
> <OS><MajorRevision> also - is this intentional? or is this just a result 
> of the bug in the SED script

I believe it's intentional. I'm pretty sure it'll break stuff if it's
changed, in any case. It relies on the convention that the OS release
numbers actually mean something: nothing serious changes when the minor
version number is upped, so there is no need to have a separate architecture
directory for it.

> If someone can help me understand what's going on here, I'll be glad to 
> submit the patch to fix the fcntl module and a few others on Mac OS X.

Are you sure the 'darwin1' arch name is really the problem ? As long as you
have that directory, which should be filled by 'make Lib/plat-darwin1' and
by 'make install' (but not by 'make test', unfortunately) it shouldn't
matter.

(So my guess is: you're doing configure, make, make test, and the
plat-darwin1 directory isn't made then, so tests that rely (indirectly) on
it will fail. Try using 'make plat-darwin1' before 'make test'.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gherman at darwin.in-berlin.de  Mon Mar 19 13:21:44 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 13:21:44 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl>
Message-ID: <3AB5F9D8.74F0B55F@darwin.in-berlin.de>

Thomas Wouters wrote:
> 
> I think the main reason such a function does not exist is that no-one wrote
> it. If you can write a portable function, or fake one by making different
> implementations on different platforms, please contribute ;) Step one is
> making an inventory of the available functions, though, so you know how
> large an intersection you have to work with. The fact that you have to start
> that study is probably the #1 reason no-one's done it yet :)

Well, this is the usual "If you need it, do it yourself!"
answer, that bites the one who dares to speak up for all
those hundreds who don't... isn't it?

Rather than asking one non-expert in N-1 +/- 1 operating
systems to implement it, why not ask N experts in
implementing Python on 1 platform to do the job? (Notice the
potential for parallelism?! :)

Uhmm, seriously, does it really take 10 years for such an 
issue to creep up high enough on the priority ladder of 
Python-Labs? 

In any case it doesn't sound like a Python 3000 feature to 
me, or maybe it should?

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")



From mal at lemburg.com  Mon Mar 19 13:34:45 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 13:34:45 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de>
Message-ID: <3AB5FCE5.92A133AB@lemburg.com>

Dinu Gherman wrote:
> 
> Thomas Wouters wrote:
> >
> > I think the main reason such a function does not exist is that no-one wrote
> > it. If you can write a portable function, or fake one by making different
> > implementations on different platforms, please contribute ;) Step one is
> > making an inventory of the available functions, though, so you know how
> > large an intersection you have to work with. The fact that you have to start
> > that study is probably the #1 reason no-one's done it yet :)
> 
> Well, this is the usual "If you need it, do it yourself!"
> answer, that bites the one who dares to speak up for all
> those hundreds who don't... isn't it?
> 
> Rather than asking one non-expert in N-1 +/- 1 operating
> systems to implement it, why not ask N experts in imple-
> menting Python on 1 platform to do the job? (Notice the
> potential for parallelism?! :)

I think the problem with this one really is the differences
in OS designs, e.g. on Windows you have the concept of drive
letters where on Unix you have mounted file systems. Then there
also is the concept of disk space quota per user which would
have to be considered too.

Also, calculating the available disk space may return false
results (e.g. for Samba shares).

Perhaps what we really need is some kind of probing function
which tests whether a certain amount of disk space would be
available ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Mon Mar 19 13:43:23 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 13:43:23 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5F9D8.74F0B55F@darwin.in-berlin.de>; from gherman@darwin.in-berlin.de on Mon, Mar 19, 2001 at 01:21:44PM +0100
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de>
Message-ID: <20010319134323.W27808@xs4all.nl>

On Mon, Mar 19, 2001 at 01:21:44PM +0100, Dinu Gherman wrote:
> Thomas Wouters wrote:
> > 
> > I think the main reason such a function does not exist is that no-one wrote
> > it. If you can write a portable function, or fake one by making different
> > implementations on different platforms, please contribute ;) Step one is
> > making an inventory of the available functions, though, so you know how
> > large an intersection you have to work with. The fact that you have to start
> > that study is probably the #1 reason no-one's done it yet :)
> 
> Well, this is the usual "If you need it, do it yourself!"
> answer, that bites the one who dares to speak up for all
> those hundreds who don't... isn't it?
> 
> Rather than asking one non-expert in N-1 +/- 1 operating
> systems to implement it, why not ask N experts in
> implementing Python on 1 platform to do the job? (Notice the
> potential for parallelism?! :)
> 
> Uhmm, seriously, does it really take 10 years for such an 
> issue to creep up high enough on the priority ladder of 
> Python-Labs? 

> In any case it doesn't sound like a Python 3000 feature to 
> me, or maybe it should?

Nope. But you seem to misunderstand the idea behind Python development (and
most of open-source development.) PythonLabs has a *lot* of stuff they have
to do, and you cannot expect them to do everything. Truth is, this is not
likely to be done by Pythonlabs, and it will never be done unless someone
does it. It might sound harsh and unfriendly, but it's just a fact. It
doesn't mean *you* have to do it, but that *someone* has to do it. Feel free
to find someone to do it :)

As for the parallelism: that means getting even more people to volunteer for
the task. And the person(s) doing it still have to figure out the common
denominators in 'get me free disk space info'.

And the fact that it's *been* 10 years shows that no one cares enough about
the free disk space issue to actually get people to code it. 10 years filled
with a fair share of C programmers starting to use Python, so plenty of
those people could've done it :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Mon Mar 19 13:57:09 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 07:57:09 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: Your message of "Mon, 19 Mar 2001 00:26:27 EST."
             <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com> 
Message-ID: <200103191257.HAA25649@cj20424-a.reston1.va.home.com>

Is there any point still copying this thread to both
python-dev at python.org and python-numerics at lists.sourceforge.net?

It's best to move it to the latter, I "pronounce". :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From gherman at darwin.in-berlin.de  Mon Mar 19 13:58:48 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 13:58:48 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <20010319134323.W27808@xs4all.nl>
Message-ID: <3AB60288.2915DF32@darwin.in-berlin.de>

Thomas Wouters wrote:
> 
> Nope. But you seem to misunderstand the idea behind Python development (and
> most of open-source development.) 

Not sure what makes you think that, but anyway.

> PythonLabs has a *lot* of stuff they have
> to do, and you cannot expect them to do everything. Truth is, this is not
> likely to be done by Pythonlabs, and it will never be done unless someone
> does it.

Apparently, I agree, I know less about what makes truth here. 
What is probably valid is that having much to do is true for 
everybody and not much of an argument, is it?

> As for the parallelism: that means getting even more people to volunteer for
> the task. And the person(s) doing it still have to figure out the common
> denominators in 'get me free disk space info'.

I'm afraid this is like arguing in circles.

> And the fact that it's *been* 10 years shows that noone cares enough about
> the free disk space issue to actually get people to code it. 10 years filled
> with a fair share of C programmers starting to use Python, so plenty of
> those people could've done it :)

I'm afraid, again, but the impression you have of nobody in ten
years asking for this function is just that, an impression, 
unless *somebody* proves the contrary.

All I can say is that I'm writing an app that I want to be 
cross-platform and that Python does not allow it to be just 
that, while Google gives you 17400 hits if you look for 
"python cross-platform". Now, this is also some kind of 
*truth*, if only one of a mismatch between reality and
wishful thinking...

Regards,

Dinu



From guido at digicool.com  Mon Mar 19 14:00:44 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 08:00:44 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: Your message of "Mon, 19 Mar 2001 15:02:55 +1200."
             <200103190302.PAA06055@s454.cosc.canterbury.ac.nz> 
References: <200103190302.PAA06055@s454.cosc.canterbury.ac.nz> 
Message-ID: <200103191300.IAA25681@cj20424-a.reston1.va.home.com>

> Christian Tismer <tismer at tismer.com>:
> 
> > But stopping the interpreter is a perfect unwind, and we
> > can start again from anywhere.
> 
> Hmmm... Let me see if I have this correct.
> 
> You can switch from uthread A to uthread B as long
> as the current depth of interpreter nesting is the
> same as it was when B was last suspended. It doesn't
> matter if the interpreter has returned and then
> been called again, as long as it's at the same
> level of nesting on the C stack.
> 
> Is that right? Is that the only restriction?

I doubt it.  To me (without a lot of context, but knowing ceval.c :-)
it would make more sense if the requirement was that there were no C
stack frames involved in B -- only Python frames.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mal at lemburg.com  Mon Mar 19 14:07:25 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 14:07:25 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <3AB5FCE5.92A133AB@lemburg.com> <3AB5FFB8.E138160A@darwin.in-berlin.de>
Message-ID: <3AB6048D.4E24AC4F@lemburg.com>

Dinu Gherman wrote:
> 
> "M.-A. Lemburg" wrote:
> >
> > I think the problem with this one really is the differences
> > in OS designs, e.g. on Windows you have the concept of drive
> > letters where on Unix you have mounted file systems. Then there
> > also is the concept of disk space quota per user which would
> > have to be considered too.
> 
> I'd be perfectly happy with something like this:
> 
>   import os
>   free = os.getfreespace('c:\\')          # on Win
>   free = os.getfreespace('/hd5')          # on Unix-like boxes
>   free = os.getfreespace('Macintosh HD')  # on Macs
>   free = os.getfreespace('ZIP-1')         # on Macs, Win, ...
> 
> etc. where the string passed is, a-priori, a name known
> by the OS for some permanent or removable drive. Network
> drives might be slightly more tricky, but probably not
> entirely impossible, I guess.

This sounds like a lot of different platform C APIs would need
to be wrapped first, e.g. quotactrl, getrlimit (already done)
+ a bunch of others since "get free space" is usually a file system
dependent call.

I guess we should take a look at how "df" does this on Unix
and maybe trick Mark Hammond into looking up the win32 API ;-)
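A call along the lines Dinu sketches did eventually land in the standard library; assuming Python 3.3 or later, `shutil.disk_usage` covers the common case on both Unix and Windows without any extra C code:

```python
import os
import shutil

# Takes a path: a drive root like "C:\\" on Windows, a mount point or
# any directory on Unix.  Returns total, used and free space in bytes.
usage = shutil.disk_usage(os.path.abspath(os.sep))
print(usage.total, usage.used, usage.free)
```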

> > Perhaps what we really need is some kind of probing function
> > which tests whether a certain amount of disk space would be
> > available ?!
> 
> Something like incrementally stuffing it with junk data until
> you get an exception, right? :)

Yep. Actually opening a file in record mode and then using
file.seek() should work on many platforms.
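Such a probe might look like this (a hypothetical helper; note that on file systems supporting sparse files the seek-and-write can succeed without actually reserving the space, so this is at best a rough check):

```python
import os
import tempfile

def can_allocate(nbytes):
    """Probe whether roughly nbytes of disk space are available by
    seeking past the end of a temporary file and writing one byte."""
    fd, path = tempfile.mkstemp()
    try:
        os.lseek(fd, nbytes - 1, os.SEEK_SET)
        os.write(fd, b"\0")   # fails with OSError if the disk is full
        return True
    except OSError:
        return False
    finally:
        os.close(fd)
        os.unlink(path)

print(can_allocate(1024))
```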

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From fredrik at pythonware.com  Mon Mar 19 14:04:59 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 19 Mar 2001 14:04:59 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de>
Message-ID: <029401c0b075$3c18e2e0$0900a8c0@SPIFF>

dinu wrote:
> Well, this is the usual "If you need it, do it yourself!"
> answer, that bites the one who dares to speak up for all
> those hundreds who don't... isn't it?

fwiw, Python already supports this for real Unix platforms:

>>> os.statvfs("/")    
(8192, 512, 524288, 365788, 348336, 600556, 598516, 598516, 0, 255)

here, the root disk holds 524288x512 bytes, with 348336x512
bytes free for the current user, and 365788x512 bytes available
for root.

(the statvfs module contains indices for accessing this "struct")
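The arithmetic above can be spelled out (POSIX only; later Pythons also expose the statvfs fields as named attributes instead of tuple indices):

```python
import os

# f_frsize is the fragment size in bytes; f_bavail counts blocks
# available to an unprivileged user, f_bfree those available to root.
st = os.statvfs("/")
free_for_user = st.f_bavail * st.f_frsize
free_for_root = st.f_bfree * st.f_frsize
print(free_for_user, free_for_root)
```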

Implementing a small subset of statvfs for Windows wouldn't
be that hard (possibly returning None for fields that don't make
sense, or are too hard to figure out).

(and with win32all, I'm sure it can be done without any C code).

Cheers /F




From guido at digicool.com  Mon Mar 19 14:12:58 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 08:12:58 -0500
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: Your message of "Mon, 19 Mar 2001 21:53:01 +1100."
             <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com> 
References: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com> 
Message-ID: <200103191312.IAA25747@cj20424-a.reston1.va.home.com>

> > Also, what would os.listdir() return ? Unicode strings or 8-bit
> > strings ?
> 
> This would not change.
> 
> This is what my testing shows:
> 
> * I can switch to a German locale, and create a file using the keystrokes
> "`atest`o".  The "`" is the dead-char so I get an umlaut over the first and
> last characters.

(Actually, grave accents, but I'm sure that to Aussie eyes, as to
Americans, they's all Greek. :-)

> * os.listdir() returns '\xe0test\xf2' for this file.

I don't understand.  This is a Latin-1 string.  Can you explain again
how the MBCS encoding encodes characters outside the Latin-1 range?

> * That same string can be passed to "open" etc to open the file.
> 
> * The only way to get that string to a Unicode object is to use the
> encodings "Latin1" or "mbcs".  Of them, "mbcs" would have to be safer, as at
> least it has a hope of handling non-latin characters :)
> 
> So - assume I am passed a Unicode object that represents this filename.  At
> the moment we simply throw that exception if we pass that Unicode object to
> open().  I am proposing that "mbcs" be used in this case instead of the
> default "ascii"
> 
> If nothing else, my idea could be considered a "short-term" solution.  If
> ever it is found to be a problem, we can simply move to the unicode APIs,
> and nothing would break - just possibly more things _would_ work :)

I have one more question.  The plan looks decent, but I don't know the
scope.  Which calls do you plan to fix?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas at xs4all.net  Mon Mar 19 14:18:34 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 14:18:34 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB60288.2915DF32@darwin.in-berlin.de>; from gherman@darwin.in-berlin.de on Mon, Mar 19, 2001 at 01:58:48PM +0100
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <20010319134323.W27808@xs4all.nl> <3AB60288.2915DF32@darwin.in-berlin.de>
Message-ID: <20010319141834.X27808@xs4all.nl>

On Mon, Mar 19, 2001 at 01:58:48PM +0100, Dinu Gherman wrote:

> All I can say is that I'm writing an app that I want to be 
> cross-platform and that Python does not allow it to be just 
> that, while Google gives you 17400 hits if you look for 
> "python cross-platform". Now, this is also some kind of 
> *truth* if only one of a mismatch between reality and wish-
> ful thinking...

I'm sure I agree, but I don't see the value in dropping everything to write
a function so Python can be that much more cross-platform. (That's just me,
though.) Python wouldn't *be* as cross-platform as it is now if not for a
group of people who weren't satisfied with it, and improved on it. And a lot
of those people were not Guido or even of the current PythonLabs team.

I've never really believed in the 'true cross-platform nature' of Python,
mostly because I know it can't *really* be true. Most of my scripts are not
portably to non-UNIX platforms, due to the use of sockets, pipes, and
hardcoded filepaths (/usr/...). Even if I did, I can hardly agree that
because there is no portable way (if any at all) to find out howmany
diskspace is free, it isn't cross-platform. Just *because* it lacks that
function makes it more cross-platform: platforms might not have the concept
of 'free space' :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gherman at darwin.in-berlin.de  Mon Mar 19 14:23:51 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 14:23:51 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <20010319134323.W27808@xs4all.nl> <3AB60288.2915DF32@darwin.in-berlin.de> <20010319141834.X27808@xs4all.nl>
Message-ID: <3AB60867.3D2A9DF@darwin.in-berlin.de>

Thomas Wouters wrote:
> 
> I've never really believed in the 'true cross-platform nature' of Python,
> mostly because I know it can't *really* be true. Most of my scripts are not
> portable to non-UNIX platforms, due to the use of sockets, pipes, and
> hardcoded filepaths (/usr/...). Even so, I can hardly agree that Python
> isn't cross-platform just because there is no portable way (if any at all)
> to find out how much disk space is free. If anything, lacking that
> function makes it *more* cross-platform: platforms might not have the
> concept of 'free space' :)

Hmm, that means we had better strip the standard library of
most of its modules (why not all?), because the less 
content there is, the more cross-platform it will be, 
right?

Well, if the concept is not there, simply throw a neat 
ConceptException! ;-)

Dinu



From gherman at darwin.in-berlin.de  Mon Mar 19 14:32:17 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 14:32:17 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>
Message-ID: <3AB60A61.A4BB2768@darwin.in-berlin.de>

Fredrik Lundh wrote:
> 
> fwiw, Python already supports this for real Unix platforms:
> 
> >>> os.statvfs("/")
> (8192, 512, 524288, 365788, 348336, 600556, 598516, 598516, 0, 255)
> 
> here, the root disk holds 524288x512 bytes, with 348336x512
> bytes free for the current user, and 365788x512 bytes available
> for root.
> 
> (the statvfs module contains indices for accessing this "struct")
> 
> Implementing a small subset of statvfs for Windows wouldn't
> be that hard (possibly returning None for fields that don't make
> sense, or are too hard to figure out).
> 
> (and with win32all, I'm sure it can be done without any C code).
> 
> Cheers /F
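(Aside for readers: the quoted tuple decodes as follows. The index names
below are the ones the statvfs module defined; the sample values are the
ones /F posted for his root disk.)

```python
# Index layout of the os.statvfs() result tuple, as named in the
# (historical) statvfs module:
F_BSIZE, F_FRSIZE, F_BLOCKS, F_BFREE, F_BAVAIL = 0, 1, 2, 3, 4

# The sample tuple posted above, for the root filesystem:
st = (8192, 512, 524288, 365788, 348336, 600556, 598516, 598516, 0, 255)

total_bytes = st[F_BLOCKS] * st[F_FRSIZE]  # 524288 x 512 bytes on the disk
root_free   = st[F_BFREE]  * st[F_FRSIZE]  # 365788 x 512 available to root
user_free   = st[F_BAVAIL] * st[F_FRSIZE]  # 348336 x 512 for the current user

print(total_bytes, root_free, user_free)
```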

Everything correct! 

I'm just trying to make the point that from a user perspective 
it would be more complete to have such a function in the os 
module (where it belongs), one that would also work on Macs, 
as well as more convenient, because even where it exists in 
modules like win32api (where it does) and in one of the (many) 
mac* ones (which I don't know yet if it does), having it in os 
would save you the if-statement on sys.platform.

It sounds silly to me if people now pushed into learning Python
as a first programming language had to use such statements to
get along, but were given the 'gift' of 1/2 = 0.5, which we
seem to spend an increasing amount of brain cycles on...

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")



From Greg.Wilson at baltimore.com  Mon Mar 19 14:32:21 2001
From: Greg.Wilson at baltimore.com (Greg Wilson)
Date: Mon, 19 Mar 2001 08:32:21 -0500
Subject: [Python-Dev] BOOST Python library
Message-ID: <930BBCA4CEBBD411BE6500508BB3328F1AC817@nsamcanms1.ca.baltimore.com>

Might be of interest to people binding C++ to Python...

http://www.boost.org/libs/python/doc/index.html

Greg

By the way, http://mail.python.org/pipermail/python-list/
now seems to include archives for February 2005.  Is this
another "future" import?





From tismer at tismer.com  Mon Mar 19 14:46:19 2001
From: tismer at tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 14:46:19 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103190302.PAA06055@s454.cosc.canterbury.ac.nz> <200103191300.IAA25681@cj20424-a.reston1.va.home.com>
Message-ID: <3AB60DAB.D92D12BF@tismer.com>


Guido van Rossum wrote:
> 
> > Christian Tismer <tismer at tismer.com>:
> >
> > > But stopping the interpreter is a perfect unwind, and we
> > > can start again from anywhere.
> >
> > Hmmm... Let me see if I have this correct.
> >
> > You can switch from uthread A to uthread B as long
> > as the current depth of interpreter nesting is the
> > same as it was when B was last suspended. It doesn't
> > matter if the interpreter has returned and then
> > been called again, as long as it's at the same
> > level of nesting on the C stack.
> >
> > Is that right? Is that the only restriction?
> 
> I doubt it.  To me (without a lot of context, but knowing ceval.c :-)
> it would make more sense if the requirement was that there were no C
> stack frames involved in B -- only Python frames.

Right. And that is only a dynamic restriction. It does not
matter how and where frames were created; it is just impossible
to jump to a frame that is held by an interpreter on the C stack.
The key to circumventing this (and the advantage of uthreads) is
not to force a jump from a nested interpreter, but to arrange
for it to happen. That is, the scheduling interpreter
does the switch, not the nested one.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From fredrik at pythonware.com  Mon Mar 19 14:54:03 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 19 Mar 2001 14:54:03 +0100
Subject: [Python-Dev] BOOST Python library
References: <930BBCA4CEBBD411BE6500508BB3328F1AC817@nsamcanms1.ca.baltimore.com>
Message-ID: <02ba01c0b07c$0ff8c9d0$0900a8c0@SPIFF>

greg wrote:
> By the way, http://mail.python.org/pipermail/python-list/
> now seems to include archives for February 2005.  Is this
> another "future" import?

did you read the post?




From gmcm at hypernet.com  Mon Mar 19 15:27:04 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Mon, 19 Mar 2001 09:27:04 -0500
Subject: [Python-Dev] Function in os module for available disk space, why  not?
In-Reply-To: <3AB60A61.A4BB2768@darwin.in-berlin.de>
Message-ID: <3AB5D0E8.16418.990252B8@localhost>

Dinu Gherman wrote:

[disk free space...]
> I'm just trying to make the point that from a user perspective it
> would be more complete to have such a function in the os module
> (where it belongs), that would also work on Macs e.g., as well as
> more convenient, because even when that existed in modules like
> win32api (where it does) and in one of the (many) mac* ones
> (which I don't know yet if it does) it would save you the
> if-statement on sys.platform.

Considering that:
 - it's not uncommon to map things into the filesystem's 
namespace for which "free space" is meaningless
 - for network mapped storage space it's quite likely you can't 
get a meaningful number
 - for compressed file systems the number will be inaccurate
 - even if you get an accurate answer, the space may not be 
there when you go to use it (so need try... except anyway)
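That last point deserves emphasis: whatever number a free-space query
returns, the write itself still has to be guarded. A sketch in
present-day syntax (`write_safely` is a hypothetical helper for
illustration, not a proposed API):

```python
import errno

def write_safely(path, data):
    """Return True on success, False if the disk fills up mid-write.

    Even a perfectly accurate free-space figure can be stale by the
    time the write happens, so this try/except is needed regardless.
    """
    try:
        with open(path, 'w') as f:
            f.write(data)
    except OSError as e:
        if e.errno == errno.ENOSPC:  # "No space left on device"
            return False
        raise
    return True
```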

I find it perfectly sensible that Python does not dignify this 
mess with an official function.

- Gordon



From guido at digicool.com  Mon Mar 19 15:58:29 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 09:58:29 -0500
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: Your message of "Mon, 19 Mar 2001 14:32:17 +0100."
             <3AB60A61.A4BB2768@darwin.in-berlin.de> 
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>  
            <3AB60A61.A4BB2768@darwin.in-berlin.de> 
Message-ID: <200103191458.JAA26035@cj20424-a.reston1.va.home.com>

> I'm just trying to make the point that from a user perspective 
> it would be more complete to have such a function in the os 
> module (where it belongs), that would also work on Macs e.g., 
> as well as more convenient, because even when that existed in 
> modules like win32api (where it does) and in one of the (many) 
> mac* ones (which I don't know yet if it does) it would save 
> you the if-statement on sys.platform.

Yeah, yeah, yeah.  Whine, whine, whine.  As has been made abundantly
clear, doing this cross-platform requires a lot of detailed platform
knowledge.  We at PythonLabs don't have all the wisdom, and we often
rely on outsiders to help us out.  Until now, finding out how much
free space there is on a disk hasn't been requested much (in fact I
don't recall seeing a request for it before).  That's why it isn't
already there -- that plus the fact that traditionally on Unix this
isn't easy to find out (statvfs didn't exist when I wrote most of the
posix module).  I'm not against adding it, but I'm not particularly
motivated to add it myself because I have too much to do already (and
the same's true for all of us here at PythonLabs).

> It sounds silly to me if people now pushed into learning Python
> as a first programming language had to use such statements to
> get along, but were given the 'gift' of 1/2 = 0.5, which we
> seem to spend an increasing amount of brain cycles on...

I would hope that you agree with me though that the behavior of
numbers is a lot more fundamental to education than finding out
available disk space.  The latter is just a system call of use to a
small number of professionals.  The former has usability implications
for all Python users.

--Guido van Rossum (home page: http://www.python.org/~guido/)




From gherman at darwin.in-berlin.de  Mon Mar 19 16:32:51 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 16:32:51 +0100
Subject: [Python-Dev] Function in os module for available disk space, why  
 not?
References: <3AB5D0E8.16418.990252B8@localhost>
Message-ID: <3AB626A3.CA4B6174@darwin.in-berlin.de>

Gordon McMillan wrote:
> 
> Considering that:
>  - it's not uncommon to map things into the filesystem's
>    namespace for which "free space" is meaningless

Unless I'm totally stupid, I see the concept of "free space" as
being tied to the *device*, not to anything being mapped to it 
or not.

>  - for network mapped storage space it's quite likely you can't
>    get a meaningful number

Fine, then let's play the exception blues...

>  - for compressed file systems the number will be inaccurate

Then why is the OS function call there...? And: nobody can
*seriously* expect an accurate figure of the remaining space for
compressed file systems anyway, and I think nobody does! But there
will always be some number >= 0 of uncompressed available bytes 
left.

>  - even if you get an accurate answer, the space may not be
>    there when you go to use it (so need try... except anyway)

The same holds for open(path, 'w') - and still this function is 
considered useful, isn't it?!

> I find it perfectly sensible that Python does not dignify this
> mess with an official function.

Well, I have yet to see a good argument against this...

Regards,

Dinu



From mal at lemburg.com  Mon Mar 19 16:46:34 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 16:46:34 +0100
Subject: [Python-Dev] BOOST Python library
References: <930BBCA4CEBBD411BE6500508BB3328F1AC817@nsamcanms1.ca.baltimore.com>
Message-ID: <3AB629DA.52C72E57@lemburg.com>

Greg Wilson wrote:
> 
> Might be of interest to people binding C++ to Python...
> 
> http://www.boost.org/libs/python/doc/index.html

Could someone please add links to all the tools they mention
in their comparison to the c++-sig page? (Not even SWIG is
mentioned there.)

  http://www.boost.org/libs/python/doc/comparisons.html

BTW, most SIGs have long expired... I guess bumping the year from
2000 to 2002 would help ;-)

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From tismer at tismer.com  Mon Mar 19 16:49:37 2001
From: tismer at tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 16:49:37 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com>
Message-ID: <3AB62A91.1DBE7F8B@tismer.com>


Neil Schemenauer wrote:
> 
> I've got a different implementation.  There are no new keywords
> and its simpler to wrap a high level interface around the low
> interface.
> 
>     http://arctrix.com/nas/python/generator2.diff
> 
> What the patch does:
> 
>     Split the big for loop and switch statement out of eval_code2
>     into PyEval_EvalFrame.
> 
>     Add a new "why" flag for ceval, WHY_SUSPEND.  It is similar to
>     WHY_RETURN except that the frame value stack and the block stack
>     are not touched.  The frame is also marked resumable before
>     returning (f_stackbottom != NULL).
> 
>     Add two new methods to frame objects, suspend and resume.
>     suspend takes one argument which gets attached to the frame
>     (f_suspendvalue).  This tells ceval to suspend as soon as control
>     gets back to this frame.  resume, strangely enough, resumes a
>     suspended frame.  Execution continues at the point it was
>     suspended.  This is done by calling PyEval_EvalFrame on the frame
>     object.
> 
>     Make frame_dealloc clean up the stack and decref f_suspendvalue
>     if it exists.
> 
> There are probably still bugs and it slows down ceval too much
> but otherwise things are looking good.  Here are some examples
> (the're a little long and but illustrative).  Low level
> interface, similar to my last example:

I've had a closer look at your patch (without actually applying
and running it) and it looks good to me.
A possible bug may be in frame_resume, where you are doing
+       f->f_back = tstate->frame;
without taking care of the prior value of f_back.

There is a little problem with your approach, which I have
to mention: I believe that without further patching it will be
easy to crash Python.
By giving frames the suspend and resume methods, you are
opening frames to everybody in a way that allows them to be
treated as a kind of callable object. This is the same problem
that Stackless ran into.
By doing so, it might be possible to call any frame, even
if it is currently being run by a nested interpreter.

I see two solutions to get out of this:

1) introduce a lock flag for frames which are currently
   executed by some interpreter on the C stack. This is
   what Stackless does currently.
   Maybe you can just use your new f_suspendvalue field.
   frame_resume must check that this value is not NULL
   on entry, and set it zero before resuming.
   See below for more.

2) Do not expose the resume and suspend methods to the
   Python user, and recode Generator.py as an extension
   module in C. This should prevent abuse of frames.

Proposal for a different interface:
I would change the interface of PyEval_EvalFrame
to accept a return value passed in, like Stackless
has its "passed_retval", and maybe another variable
that explicitly tells the kind of the frame call,
i.e. passing the desired why_code. This also would
make it easier to cope with the other needs of Stackless
later in a cleaner way.
Well, I see you are clearing the f_suspendvalue later.
Maybe just adding the why_code to the parameters
would do. f_suspendvalue can be used for different
things, it can also become the place to store a return
value, or a coroutine transfer parameter.

In the future, there will not only be the suspend/resume
interface. Frames will be called for different reasons:
suspend  with a value  (generators)
return   with a value  (normal function calls)
transfer with a value  (coroutines)
transfer with no value (microthreads)
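As a point of comparison, the generator half of that list can already be
emulated in pure Python with one worker thread per generator, roughly
what the demo Generator.py does. A sketch (present-day syntax, for
illustration only; the whole point of the patch under discussion is to
get these semantics without threads):

```python
import threading

class Generator:
    """Emulate suspend/resume with a worker thread per generator.

    This is a sketch of the semantics, not the proposed C implementation.
    """
    def __init__(self, func):
        self._ready = threading.Semaphore(0)  # a value is ready for get()
        self._go = threading.Semaphore(0)     # the producer may continue
        self._value = None
        self._done = False
        t = threading.Thread(target=self._run, args=(func,))
        t.daemon = True
        t.start()

    def _run(self, func):
        self._go.acquire()           # wait for the first get()
        func(self._put)              # producer calls _put() to yield values
        self._done = True
        self._ready.release()        # wake the consumer one last time

    def _put(self, value):
        # "suspend with a value": hand the value over, then block
        self._value = value
        self._ready.release()
        self._go.acquire()           # block until the next get() ("resume")

    def get(self):
        # "resume": let the producer run until it suspends again
        self._go.release()
        self._ready.acquire()
        if self._done:
            raise StopIteration
        return self._value

def counter(put):
    for i in range(3):
        put(i)

g = Generator(counter)
print(g.get(), g.get(), g.get())   # -> 0 1 2
```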

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From moshez at zadka.site.co.il  Mon Mar 19 17:00:01 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 19 Mar 2001 18:00:01 +0200
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5F43D.E33B188D@darwin.in-berlin.de>
References: <3AB5F43D.E33B188D@darwin.in-berlin.de>
Message-ID: <E14f24f-0004ny-00@darjeeling>

On Mon, 19 Mar 2001 12:57:49 +0100, Dinu Gherman <gherman at darwin.in-berlin.de> wrote:
> I wrote on comp.lang.python today:
> > 
> > is there a simple way (or any way at all) to find out for 
> > any given hard disk how much free space is left on that
> > device? I looked into the os module, but either not hard
> > enough or there is no such function. Of course, the ideal
> > solution would be platform-independent, too... :)
> 
> Is there any good reason for not having a cross-platform
> solution to this? I'm certainly not the first to ask for
> such a function and it certainly exists for all platforms,
> doesn't it?

No, it doesn't.
Specifically, the information is always unreliable, especially
when you start considering NFS mounted directories and things
like that.

> I know that OS differ in the services they provide, but in
> this case it seems to me that each one *must* have such a 
> function

This doesn't have a *meaning* in UNIX. (In the sense that I can
think of so many special cases that having a half-working implementation
is worse than nothing.)
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From gherman at darwin.in-berlin.de  Mon Mar 19 17:06:27 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 17:06:27 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>  
	            <3AB60A61.A4BB2768@darwin.in-berlin.de> <200103191458.JAA26035@cj20424-a.reston1.va.home.com>
Message-ID: <3AB62E83.ACBDEB3@darwin.in-berlin.de>

Guido van Rossum wrote:
> 
> Yeah, yeah, yeah.  Whine, whine, whine. [...]
> I'm not against adding it, but I'm not particularly motivated 
> to add it myself [...]

Good! After doing some quick research on Google, it turns out 
this function is also available on MacOS, as expected, named 
PBHGetVInfo(). See this page for details plus a sample Pascal 
function using it:

  http://developer.apple.com/techpubs/mac/Files/Files-96.html

I'm not sure what else is needed to use it, but at least it's
there and maybe somebody more of a Mac expert than I am could
help out here... I'm going to continue this on c.l.p. in the
original thread... Hey, maybe it is already available in one
of the many mac packages. Well, I'll start some digging...

> I would hope that you agree with me though that the behavior of
> numbers is a lot more fundamental to education than finding out
> available disk space.  The latter is just a system call of use 
> to a small number of professionals.  The former has usability 
> implications for all Python users.

I do agree, sort of, but it appears that often there is much 
more work being spent on fantastic new features, where improving
existing ones would also be very beneficial. For me at least,
there is considerable value in a system's consistency and
completeness, and not only in its number of features.

Thanks everybody (now that Guido has spoken we have to finish)! 
It was fun! :)

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")



From guido at digicool.com  Mon Mar 19 17:32:33 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 11:32:33 -0500
Subject: [Python-Dev] Python T-shirts
Message-ID: <200103191632.LAA26632@cj20424-a.reston1.va.home.com>

At the conference we handed out T-shirts with the slogan on the back
"Python: programming the way Guido indented it".  We've been asked if
there are any left.  Well, we gave them all away, but we're ordering
more.  You can get them for $10 + S+H.  Write to Melissa Light
<melissa at digicool.com>.  Be nice to her!

--Guido van Rossum (home page: http://www.python.org/~guido/)




From nas at arctrix.com  Mon Mar 19 17:45:35 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 08:45:35 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <3AB62A91.1DBE7F8B@tismer.com>; from tismer@tismer.com on Mon, Mar 19, 2001 at 04:49:37PM +0100
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com>
Message-ID: <20010319084534.A18938@glacier.fnational.com>

On Mon, Mar 19, 2001 at 04:49:37PM +0100, Christian Tismer wrote:
> A possible bug may be in frame_resume, where you are doing
> +       f->f_back = tstate->frame;
> without taking care of the prior value of f_back.

Good catch.  There is also a bug when f_suspendvalue is being set
(Py_XDECREF should be called first).

[Christian on disallowing resume on frame already running]
> 1) introduce a lock flag for frames which are currently
>    executed by some interpreter on the C stack. This is
>    what Stackless does currently.
>    Maybe you can just use your new f_suspendvalue field.
>    frame_resume must check that this value is not NULL
>    on entry, and set it zero before resuming.

Another good catch.  It would be easy to set f_stackbottom to
NULL at the top of PyEval_EvalFrame.  resume already checks this
to decide if the frame is resumable.

> 2) Do not expose the resume and suspend methods to the
>    Python user, and recode Generator.py as an extension
>    module in C. This should prevent abuse of frames.

I like the frame methods.  However, this may be a good idea since
Jython may implement things quite differently.

> Proposal for a different interface:
> I would change the interface of PyEval_EvalFrame
> to accept a return value passed in, like Stackless
> has its "passed_retval", and maybe another variable
> that explicitly tells the kind of the frame call,
> i.e. passing the desired why_code. This also would
> make it easier to cope with the other needs of Stackless
> later in a cleaner way.
> Well, I see you are clearing the f_suspendvalue later.
> Maybe just adding the why_code to the parameters
> would do. f_suspendvalue can be used for different
> things, it can also become the place to store a return
> value, or a coroutine transfer parameter.
> 
> In the future, there will not obly be the suspend/resume
> interface. Frames will be called for different reasons:
> suspend  with a value  (generators)
> return   with a value  (normal function calls)
> transfer with a value  (coroutines)
> transfer with no value (microthreads)

The interface needs some work and I'm happy to change it to
better accommodate stackless.  f_suspendvalue and f_stackbottom
are pretty ugly, IMO.  One unexpected benefit: with
PyEval_EvalFrame split out of eval_code2 the interpreter is 5%
faster on my machine.  I suspect the compiler has an easier time
optimizing the loop in the smaller function.

BTW, where is this stackless light patch I've been hearing about?
I would be interested to look at it.  Thanks for your comments.

  Neil



From tismer at tismer.com  Mon Mar 19 17:58:46 2001
From: tismer at tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 17:58:46 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com>
Message-ID: <3AB63AC6.4799C73@tismer.com>


Neil Schemenauer wrote:
...
> > 2) Do not expose the resume and suspend methods to the
> >    Python user, and recode Generator.py as an extension
> >    module in C. This should prevent abuse of frames.
> 
> I like the frame methods.  However, this may be a good idea since
> Jython may implement things quite differently.

Maybe a good reason. Exposing frame methods is nice
to play with. Eventually, you will want the hard-coded
generators. The same thing is happening with Stackless
now. I have a different spelling for frames :-) but
they have to vanish now.

[immature pre-pre-pre-interface]
> The interface needs some work and I'm happy to change it to
> better accommodate stackless.  f_suspendvalue and f_stackbottom
> are pretty ugly, IMO.  One unexpected benefit: with
> PyEval_EvalFrame split out of eval_code2 the interpreter is 5%
> faster on my machine.  I suspect the compiler has an easier time
> optimizing the loop in the smaller function.

Really!? I thought you reported a speed loss?

> BTW, where is this stackless light patch I've been hearing about?
> I would be interested to look at it.  Thanks for your comments.

It does not exist at all. It is just an idea, and
we are looking for somebody who can implement it.
At the moment, we have a PEP (thanks to Gordon), but
there is no specification of StackLite.
I believe PEPs are a good idea.
In this special case, I'd recommend trying to write
a StackLite, and then write the PEP :-)

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From mal at lemburg.com  Mon Mar 19 17:07:10 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 17:07:10 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>
Message-ID: <3AB62EAE.FCFD7C9F@lemburg.com>

Fredrik Lundh wrote:
> 
> dinu wrote:
> > Well, this is the usual "If you need it, do it yourself!"
> > answer, that bites the one who dares to speak up for all
> > those hundreds who don't... isn't it?
> 
> fwiw, Python already supports this for real Unix platforms:
> 
> >>> os.statvfs("/")
> (8192, 512, 524288, 365788, 348336, 600556, 598516, 598516, 0, 255)
> 
> here, the root disk holds 524288x512 bytes, with 348336x512
> bytes free for the current user, and 365788x512 bytes available
> for root.
> 
> (the statvfs module contains indices for accessing this "struct")
> 
> Implementing a small subset of statvfs for Windows wouldn't
> be that hard (possibly returning None for fields that don't make
> sense, or are too hard to figure out).
> 
> (and with win32all, I'm sure it can be done without any C code).

It seems that all we need is Jack to port this to the Mac
and we have a working API here :-)

Let's do it...

import sys,os

try:
    os.statvfs

except AttributeError:
    # Win32 implementation...
    # Mac implementation...
    pass

else:
    import statvfs
    
    def freespace(path):
        """ freespace(path) -> integer
        Return the number of bytes available to the user on the file system
        pointed to by path."""
        s = os.statvfs(path)
        return s[statvfs.F_BAVAIL] * long(s[statvfs.F_BSIZE])

if __name__=='__main__':
    path = sys.argv[1]
    print 'Free space on %s: %i kB (%i bytes)' % (path,
                                                  freespace(path) / 1024,
                                                  freespace(path))

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From pedroni at inf.ethz.ch  Mon Mar 19 18:08:41 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Mon, 19 Mar 2001 18:08:41 +0100 (MET)
Subject: [Python-Dev] Simple generators, round 2
Message-ID: <200103191708.SAA09258@core.inf.ethz.ch>

Hi.

> > 2) Do not expose the resume and suspend methods to the
> >    Python user, and recode Generator.py as an extension
> >    module in C. This should prevent abuse of frames.
> 
> I like the frame methods.  However, this may be a good idea since
> Jython may implement things quite differently.

I should repeat this (if we want to avoid threads for implementing
generators, because for them that's really overkill, especially
if they are used in tight loops): the Jython codebase has the
following limitations:

- suspension points should be known at compilation time
 (we produce JVM bytecode that would have to be instrumented
  to allow restarting at a given point). The only other solution
  is to compile a method with a big switch that has a case
  for every Python line, which is quite expensive.
  
- a suspension point can at most do a return; it cannot go up 
  more than a single frame, even if it just wants to discard them.
  Maybe there is a workaround to this using exceptions, but they
  are expensive and again overkill for a tight loop.

=> we can support something like a suspend keyword. The rest is pain :-( .

regards.




From nas at arctrix.com  Mon Mar 19 18:21:59 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 09:21:59 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <3AB63AC6.4799C73@tismer.com>; from tismer@tismer.com on Mon, Mar 19, 2001 at 05:58:46PM +0100
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com>
Message-ID: <20010319092159.B19071@glacier.fnational.com>

[Neil]
> One unexpected benefit: with PyEval_EvalFrame split out of
> eval_code2 the interpreter is 5% faster on my machine.  I
> suspect the compiler has an easier time optimizing the loop in
> the smaller function.

[Christian]
> Really!? I thought you told about a speed loss?

You must be referring to an earlier post I made.  That was purely
speculation.  I didn't time things until the weekend.  Also, the
5% speedup is based on the refactoring of eval_code2 with the
added generator bits.  I wouldn't put much weight on the apparent
speedup either.  It's probably slower on other platforms.

  Neil



From tismer at tismer.com  Mon Mar 19 18:25:43 2001
From: tismer at tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 18:25:43 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com>
Message-ID: <3AB64117.8D3AEBED@tismer.com>


Neil Schemenauer wrote:
> 
> [Neil]
> > One unexpected benefit: with PyEval_EvalFrame split out of
> > eval_code2 the interpreter is 5% faster on my machine.  I
> > suspect the compiler has an easier time optimizing the loop in
> > the smaller function.
> 
> [Christian]
> > Really!? I thought you told about a speed loss?
> 
> You must be referring to an earlier post I made.  That was purely
> speculation.  I didn't time things until the weekend.  Also, the
> 5% speedup is base on the refactoring of eval_code2 with the
> added generator bits.  I wouldn't put much weight on the apparent
> speedup either.  It's probably slower on other platforms.

Nevermind. I believe this is going to be the best possible
efficient implementation of generators.
And I'm very confident that it will make it into the
core with ease and without the need for a PEP.

congrats - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From nas at arctrix.com  Mon Mar 19 18:27:33 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 09:27:33 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010319092159.B19071@glacier.fnational.com>; from nas@arctrix.com on Mon, Mar 19, 2001 at 09:21:59AM -0800
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com>
Message-ID: <20010319092733.C19071@glacier.fnational.com>

On Mon, Mar 19, 2001 at 09:21:59AM -0800, Neil Schemenauer wrote:
> Also, the 5% speedup is base on the refactoring of eval_code2
> with the added generator bits.

Ugh, that should say "based on the refactoring of eval_code2
WITHOUT the generator bits".

  engage-fingers-before-brain-ly y'rs Neil




From nas at arctrix.com  Mon Mar 19 18:38:44 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 09:38:44 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <3AB64117.8D3AEBED@tismer.com>; from tismer@tismer.com on Mon, Mar 19, 2001 at 06:25:43PM +0100
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com> <3AB64117.8D3AEBED@tismer.com>
Message-ID: <20010319093844.D19071@glacier.fnational.com>

On Mon, Mar 19, 2001 at 06:25:43PM +0100, Christian Tismer wrote:
> I believe this is going to be the best possible efficient
> implementation of generators.  And I'm very confident that it
> will make it into the core with ease and without the need for a
> PEP.

I sure hope not.  We need to come up with better APIs and a
better interface from Python code.  The current interface is not
efficiently implementable in Jython, AFAIK.  We also need to
figure out how to make things play nicely with stackless.  IMHO,
a PEP is required.

My plan now is to look at how stackless works, as I now understand
some of the issues.  Since no stackless light patch exists,
writing one may be a good learning project.  It's still a long
road to 2.2. :-)

  Neil



From tismer at tismer.com  Mon Mar 19 18:43:20 2001
From: tismer at tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 18:43:20 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com> <3AB64117.8D3AEBED@tismer.com> <20010319093844.D19071@glacier.fnational.com>
Message-ID: <3AB64538.15522433@tismer.com>


Neil Schemenauer wrote:
> 
> On Mon, Mar 19, 2001 at 06:25:43PM +0100, Christian Tismer wrote:
> > I believe this is going to be the best possible efficient
> > implementation of generators.  And I'm very confident that it
> > will make it into the core with ease and without the need for a
> > PEP.
> 
> I sure hope not.  We need to come up with better APIs and a
> better interface from Python code.  The current interface is not
> efficiently implementable in Jython, AFAIK.  We also need to
> figure out how to make things play nicely with stackless.  IMHO,
> a PEP is required.

Yes, sure. What I meant was not the current code, but the
simplistic, straightforward approach.

> My plan now is to look at how stackless works, as I now understand
> some of the issues.  Since no stackless light patch exists,
> writing one may be a good learning project.  It's still a long
> road to 2.2. :-)

Warning: *unreadable* code. If you really want to read that,
make sure to use ceval_pre.c; it comes almost without optimization.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From paulp at ActiveState.com  Mon Mar 19 18:55:36 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 19 Mar 2001 09:55:36 -0800
Subject: [Python-Dev] nondist/sandbox/typecheck
Message-ID: <3AB64818.DA458342@ActiveState.com>

Could I check in some type-checking code into nondist/sandbox? It's
quickly getting to the point where real users can start to see benefits
from it and I would like to let people play with it to convince
themselves of that.

Consider these mistaken statements:

os.path.abspath(None)
xmllib.XMLParser().feed(None)
sre.compile(".*", "I")

Here's what we used to get as tracebacks:

	os.path.abspath(None)
	(no error: any false value is treated the same as an empty
string!)

	xmllib.XMLParser().feed(None)

Traceback (most recent call last):
  File "errors.py", line 8, in ?
    xmllib.XMLParser().feed(None)
  File "c:\python20\lib\xmllib.py", line 164, in feed
    self.rawdata = self.rawdata + data
TypeError: cannot add type "None" to string

	sre.compile(".*", "I")

Traceback (most recent call last):
  File "errors.py", line 12, in ?
    sre.compile(".*", "I")
  File "c:\python20\lib\sre.py", line 62, in compile
    return _compile(pattern, flags)
  File "c:\python20\lib\sre.py", line 100, in _compile
    p = sre_compile.compile(pattern, flags)
  File "c:\python20\lib\sre_compile.py", line 359, in compile
    p = sre_parse.parse(p, flags)
  File "c:\python20\lib\sre_parse.py", line 586, in parse
    p = _parse_sub(source, pattern, 0)
  File "c:\python20\lib\sre_parse.py", line 294, in _parse_sub
    items.append(_parse(source, state))
  File "c:\python20\lib\sre_parse.py", line 357, in _parse
    if state.flags & SRE_FLAG_VERBOSE:
TypeError: bad operand type(s) for &

====================

Here's what we get now:

	os.path.abspath(None)

Traceback (most recent call last):
  File "errors.py", line 4, in ?
    os.path.abspath(None)
  File "ntpath.py", line 401, in abspath
    def abspath(path):
InterfaceError: Parameter 'path' expected Unicode or 8-bit string.
Instead it got 'None' (None)

	xmllib.XMLParser().feed(None)

Traceback (most recent call last):
  File "errors.py", line 8, in ?
    xmllib.XMLParser().feed(None)
  File "xmllib.py", line 163, in feed
    def feed(self, data):
InterfaceError: Parameter 'data' expected Unicode or 8-bit string.
Instead it got 'None' (None)

	sre.compile(".*", "I")

Traceback (most recent call last):
  File "errors.py", line 12, in ?
    sre.compile(".*", "I")
  File "sre.py", line 61, in compile
    def compile(pattern, flags=0):
InterfaceError: Parameter 'flags' expected None.
Instead it got 'string' ('I')
None
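The kind of parameter checking shown in the tracebacks above can be sketched as a decorator. This is only a rough analogue of the idea (the names `InterfaceError` and `expects` are hypothetical here, not the actual sandbox API, and modern decorator syntax is used for brevity):

```python
# Minimal sketch of call-time parameter type checking, in the spirit of
# the improved tracebacks above.  Hypothetical names, not the sandbox API.
class InterfaceError(TypeError):
    pass

def expects(**types):
    """Check the named parameters' types whenever the function is called."""
    def decorate(func):
        argnames = func.__code__.co_varnames[:func.__code__.co_argcount]
        def wrapper(*args, **kwargs):
            bound = dict(zip(argnames, args))
            bound.update(kwargs)
            for name, expected in types.items():
                if name in bound and not isinstance(bound[name], expected):
                    raise InterfaceError(
                        "Parameter %r expected %s. Instead it got %r"
                        % (name, expected.__name__, bound[name]))
            return func(*args, **kwargs)
        return wrapper
    return decorate

@expects(path=str)
def abspath(path):          # toy stand-in for os.path.abspath
    return "/" + path
```

With this in place, `abspath(None)` fails immediately at the call boundary with a message naming the parameter, instead of propagating `None` into library internals.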

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From ping at lfw.org  Mon Mar 19 22:07:10 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Mon, 19 Mar 2001 13:07:10 -0800 (PST)
Subject: [Python-Dev] Nested scopes core dump
Message-ID: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>

I just tried this:

    Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
    [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> from __future__ import nested_scopes
    >>> def f(x):
    ...     x = x + 1
    ...     a = x + 3
    ...     b = x + 5
    ...     def g(y):
    ...         def h(z):
    ...             return a, b, x, y, z
    ...         return h
    ...     return g
    ...
    Fatal Python error: non-string found in code slot
    Aborted (core dumped)

gdb says v is NULL:

    #5  0x8059cce in PyCode_New (argcount=1, nlocals=2, stacksize=5, flags=3, code=0x8144688, consts=0x8145c1c, names=0x8122974, varnames=0x8145c6c, freevars=0x80ecc14, cellvars=0x81225d4, filename=0x812f900, name=0x810c288, firstlineno=5, lnotab=0x8144af0) at Python/compile.c:279
    279             intern_strings(freevars);
    (gdb) down
    #4  0x8059b80 in intern_strings (tuple=0x80ecc14) at Python/compile.c:233
    233                             Py_FatalError("non-string found in code slot");
    (gdb) list 230
    225     static int
    226     intern_strings(PyObject *tuple)
    227     {
    228             int i;
    229
    230             for (i = PyTuple_GET_SIZE(tuple); --i >= 0; ) {
    231                     PyObject *v = PyTuple_GET_ITEM(tuple, i);
    232                     if (v == NULL || !PyString_Check(v)) {
    233                             Py_FatalError("non-string found in code slot");
    234                             PyErr_BadInternalCall();
    (gdb) print v
    $1 = (PyObject *) 0x0

Hope this helps (this test should probably be added to test_scope.py too),


-- ?!ng

Happiness comes more from loving than being loved; and often when our
affection seems wounded it is only our vanity bleeding. To love, and
to be hurt often, and to love again--this is the brave and happy life.
    -- J. E. Buchrose 




From jeremy at alum.mit.edu  Mon Mar 19 22:09:30 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 19 Mar 2001 16:09:30 -0500 (EST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
Message-ID: <15030.30090.898715.282761@w221.z064000254.bwi-md.dsl.cnc.net>

Please submit bug reports as SF bug reports.  (Thanks for finding it,
but if I don't get to it today this email does me little good.)

Jeremy



From MarkH at ActiveState.com  Mon Mar 19 22:53:29 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 20 Mar 2001 08:53:29 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <09c001c0b06d$0f359eb0$8119fea9@neil>
Message-ID: <LCEPIIGDJPKCOIHOBJEPMEEJDGAA.MarkH@ActiveState.com>

Hi Neil!

>    The "program files" and "user" directory should still have names

"should" or "will"?

> representable in the normal locale used by the user so they are able to
> access them by using their standard encoding in a Python narrow character
> string to the open function.

I don't understand what "their standard encoding" is here.  My understanding
is that "their standard encoding" is whatever WideCharToMultiByte() returns,
and this is what mbcs is.

My understanding is that their "default encoding" will bear no relationship
to encoding names as known by Python.  ie, given a user's locale, there is
no reasonable way to determine which of the Python encoding names will
always correctly work on these strings.

> > The way I see it, to fix this we have 2 basic choices when a Unicode
> > object is passed as a filename:
> > * we call the Unicode versions of the CRTL.
>
>    This is by far the better approach IMO as it is more general and will
> work for people who switch locales or who want to access files created by
> others using other locales. Although you can always use the horrid mangled
> "*~1" names.
>
> > * we auto-encode using the "mbcs" encoding, and still call the
> > non-Unicode versions of the CRTL.
>
>    This will improve things but to a lesser extent than the above. May be
> the best possible on 95.

I understand the above, but want to resist having different NT and 9x
versions of Python for obvious reasons.  I also wanted to avoid determining
at runtime if the platform has Unicode support and magically switching to
them.

I concur on the "may be the best possible on 95" and see no real downsides
on NT, other than the freak possibility of the default encoding being changed
_between_ us encoding a string and the OS decoding it.

Recall that my change is only to convert from Unicode to a string so the
file system can convert back to Unicode.  There is no real opportunity for
the current locale to change on this thread during this process.

I guess I see 3 options:

1) Do nothing, thereby forcing the user to manually encode the Unicode
object.  Only by encoding the string can they access these filenames, which
means the exact same issues apply.

2) Move to Unicode APIs where available, which will be a much deeper patch
and much harder to get right on non-Unicode Windows platforms.

3) Like 1, but simply automate the encoding task.
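Option (3), automating the encoding step, could be sketched like this. This is an illustration, not Mark's actual patch; "mbcs" is a Windows-only codec, so the sketch falls back to the general filesystem encoding elsewhere just so it runs anywhere:

```python
import sys

def encode_filename(name):
    # Option 3 above: if the caller passed a Unicode object, encode it
    # automatically before handing it to the narrow-character CRTL,
    # instead of forcing every caller to encode by hand.
    if isinstance(name, bytes):
        return name                      # already a narrow string
    # "mbcs" exists only on Windows; elsewhere use the filesystem
    # encoding so this sketch stays runnable (an assumption, not part
    # of the original proposal).
    codec = "mbcs" if sys.platform == "win32" else sys.getfilesystemencoding()
    return name.encode(codec)
```

The point of the scheme is that callers keep passing Unicode objects; only the boundary layer decides how to narrow them, so the policy can change later without touching user code.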

My proposal was to do (3).  It is not clear from your mail what you propose.
Like me, you seem to agree (2) would be perfect in an ideal world, but you
also agree we don't live in one.

What is your recommendation?

Mark.




From skip at pobox.com  Mon Mar 19 22:53:56 2001
From: skip at pobox.com (Skip Montanaro)
Date: Mon, 19 Mar 2001 15:53:56 -0600 (CST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <15030.30090.898715.282761@w221.z064000254.bwi-md.dsl.cnc.net>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
	<15030.30090.898715.282761@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <15030.32756.969347.565911@beluga.mojam.com>

    Jeremy> Please submit bug reports as SF bug reports.  (Thanks for
    Jeremy> finding it, but if I don't get to it today this email does me
    Jeremy> little good.)

What?  You actually delete email?  Or do you have an email system that works
like Usenet? 

;-)

S





From nhodgson at bigpond.net.au  Mon Mar 19 23:52:34 2001
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Tue, 20 Mar 2001 09:52:34 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPMEEJDGAA.MarkH@ActiveState.com>
Message-ID: <02e401c0b0c7$4a38a2a0$8119fea9@neil>

   Morning Mark,


> >    The "program files" and "user" directory should still have names
>
> "should" or "will"?

   Should. I originally wrote "will" but then thought of the scenario where
I install W2K with Russian as the default locale. The "Program Files"
directory (and other standard directories) is created with a localised name
(call it "Russian PF" for now) including some characters not representable
in Latin 1. I then start working with a Python program and decide to change
the input locale to German. The "Russian PF" string is representable in
Unicode but not in the code page used for German so a WideCharToMultiByte
using the current code page will fail. Fail here means not that the function
will error but that a string will be constructed which will not round trip
back to Unicode and thus is unlikely to be usable to open the file.

> > representable in the normal locale used by the user so they are able to
> > access them by using their standard encoding in a Python narrow
> > character string to the open function.
>
> I don't understand what "their standard encoding" is here.  My
> understanding is that "their standard encoding" is whatever
> WideCharToMultiByte() returns, and this is what mbcs is.

    WideCharToMultiByte has an explicit code page parameter, so it's the
caller that has to know what they want. The most common thing to do is ask
the system for the input locale and use this in the call to
WideCharToMultiByte and there are some CRT functions like wcstombs that wrap
this. Passing CP_THREAD_ACP to WideCharToMultiByte is another way. Scintilla
uses:

static int InputCodePage() {
 HKL inputLocale = ::GetKeyboardLayout(0);
 LANGID inputLang = LOWORD(inputLocale);
 char sCodePage[10];
 int res = ::GetLocaleInfo(MAKELCID(inputLang, SORT_DEFAULT),
   LOCALE_IDEFAULTANSICODEPAGE, sCodePage, sizeof(sCodePage));
 if (!res)
  return 0;
 return atoi(sCodePage);
}

   which is the result of reading various articles from MSDN and MSJ.
microsoft.public.win32.programmer.international is the news group for this
and Michael Kaplan answers a lot of these sorts of questions.

> My understanding is that their "default encoding" will bear no
relationship
> to encoding names as known by Python.  ie, given a user's locale, there is
> no reasonable way to determine which of the Python encoding names will
> always correctly work on these strings.

   Uncertain. There should be a way to get the input locale as a Python
encoding name, or working on these sorts of issues will be difficult.

> Recall that my change is only to convert from Unicode to a string so the
> file system can convert back to Unicode.  There is no real opportunity for
> the current locale to change on this thread during this process.

   But the Unicode string may be non-representable using the current locale.
So doing the conversion makes the string unusable.

> My proposal was to do (3).  It is not clear from your mail what you
> propose.  Like me, you seem to agree (2) would be perfect in an ideal
> world, but you also agree we don't live in one.

   I'd prefer (2). Support Unicode well on the platforms that support it
well. Providing some help on 95 is nice but not IMO as important.

   Neil





From mwh21 at cam.ac.uk  Tue Mar 20 00:14:08 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 19 Mar 2001 23:14:08 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Ka-Ping Yee's message of "Mon, 19 Mar 2001 13:07:10 -0800 (PST)"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
Message-ID: <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>

Ka-Ping Yee <ping at lfw.org> writes:

> I just tried this:
> 
>     Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> from __future__ import nested_scopes
>     >>> def f(x):
>     ...     x = x + 1
>     ...     a = x + 3
>     ...     b = x + 5
>     ...     def g(y):
>     ...         def h(z):
>     ...             return a, b, x, y, z
>     ...         return h
>     ...     return g
>     ...
>     Fatal Python error: non-string found in code slot
>     Aborted (core dumped)

Here, look at this:

static int
symtable_freevar_offsets(PyObject *freevars, int offset)
{
      PyObject *name, *v;
      int pos;

      /* The cell vars are the first elements of the closure,
         followed by the free vars.  Update the offsets in
         c_freevars to account for number of cellvars. */  
      pos = 0;
      while (PyDict_Next(freevars, &pos, &name, &v)) {
              int i = PyInt_AS_LONG(v) + offset;
              PyObject *o = PyInt_FromLong(i);
              if (o == NULL)
                      return -1;
              if (PyDict_SetItem(freevars, name, o) < 0) {
                      Py_DECREF(o);
                      return -1;
              }
              Py_DECREF(o);
      }
      return 0;
}

this modifies the dictionary you're iterating over.  This is, as they
say, a Bad Idea[*].

https://sourceforge.net/tracker/index.php?func=detail&aid=409864&group_id=5470&atid=305470

is a minimal-effort/impact fix.  I don't know the new compile.c well
enough to really judge the best fix.

Cheers,
M.

[*] I thought that if you used the same keys when you were iterating
    over a dict you were safe.  It seems not, at least as far as I
    could tell with mounds of debugging printf's.
-- 
  (Of course SML does have its weaknesses, but by comparison, a
  discussion of C++'s strengths and flaws always sounds like an
  argument about whether one should face north or east when one
  is sacrificing one's goat to the rain god.)         -- Thant Tessman




From jeremy at alum.mit.edu  Tue Mar 20 00:17:30 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 19 Mar 2001 18:17:30 -0500 (EST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
	<m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "MWH" == Michael Hudson <mwh21 at cam.ac.uk> writes:

  MWH> [*] I thought that if you used the same keys when you were
  MWH> iterating over a dict you were safe.  It seems not, at least as
  MWH> far as I could tell with mounds of debugging printf's.

I did, too.  Anyone know what the problems is?  

Jeremy



From martin at loewis.home.cs.tu-berlin.de  Tue Mar 20 00:16:34 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 20 Mar 2001 00:16:34 +0100
Subject: [Python-Dev] Unicode and the Windows file system.
Message-ID: <200103192316.f2JNGYK02041@mira.informatik.hu-berlin.de>

> The way I see it, to fix this we have 2 basic choices when a Unicode object
> is passed as a filename:
>
> * we call the Unicode versions of the CRTL.

That is the choice that I prefer. I understand that it won't work on
Win95, but I think that needs to be worked around.

By using "Unicode versions" of an API, you are making the code
Windows-specific anyway. So I wonder whether it might be better to use
the plain API instead of the CRTL; I also wonder how difficult it
actually is to do "the right thing all the time".

On NT, the file system is defined in terms of Unicode, so passing
Unicode in and out is definitely the right thing (*). On Win9x, the
file system uses some platform specific encoding, which means that
using that encoding is the right thing. On Unix, there is no
established convention, but UTF-8 was invented exactly to deal with
Unicode in Unix file systems, so that might be appropriate choice
(**).

So I'm in favour of supporting Unicode on all file system APIs; that
does include os.listdir(). For 2.1, that may be a bit much given that
a beta release has already been seen; so only accepting Unicode on
input is what we can do now.

Regards,
Martin

(*) Converting to the current MBCS might be lossy, and it might not
support all file names. The "ASCII only" approach of 2.0 was precisely
taken to allow getting it right later; I strongly discourage any
approach that attempts to drop the restriction in a way that does not
allow getting it right later.

(**) At least, that is the best bet. Many Unix installations use some
other encoding in their file names; if Unicode becomes more common,
most likely installations will also use UTF-8 on their file systems.
Unless it can be established what the file system encoding is,
returning Unicode from os.listdir is probably not the right thing.



From mwh21 at cam.ac.uk  Tue Mar 20 00:44:11 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 19 Mar 2001 23:44:11 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Jeremy Hylton's message of "Mon, 19 Mar 2001 18:17:30 -0500 (EST)"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk>

Jeremy Hylton <jeremy at alum.mit.edu> writes:

> >>>>> "MWH" == Michael Hudson <mwh21 at cam.ac.uk> writes:
> 
>   MWH> [*] I thought that if you used the same keys when you were
>   MWH> iterating over a dict you were safe.  It seems not, at least as
>   MWH> far as I could tell with mounds of debugging printf's.
> 
> I did, too.  Anyone know what the problems is?  

The dict's resizing, it turns out.

I note that in PyDict_SetItem, the check to see if the dict needs
resizing occurs *before* it is known whether the key is already in the
dict.  But if this is the problem, how come we haven't been bitten by
this before?

Cheers,
M.

-- 
  While preceding your entrance with a grenade is a good tactic in
  Quake, it can lead to problems if attempted at work.    -- C Hacking
               -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html




From jeremy at alum.mit.edu  Tue Mar 20 00:48:42 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 19 Mar 2001 18:48:42 -0500 (EST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
	<m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>
	<15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net>
	<m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "MH" == Michael Hudson <mwh21 at cam.ac.uk> writes:

  MH> Jeremy Hylton <jeremy at alum.mit.edu> writes:
  >> >>>>> "MWH" == Michael Hudson <mwh21 at cam.ac.uk> writes:
  >>
  MWH> [*] I thought that if you used the same keys when you were
  MWH> iterating over a dict you were safe.  It seems not, at least as
  MWH> far as I could tell with mounds of debugging printf's.
  >>
  >> I did, too.  Anyone know what the problems is?

  MH> The dict's resizing, it turns out.

So a hack to make the iteration safe would be to assign an element
and then delete it?

  MH> I note that in PyDict_SetItem, the check to see if the dict
  MH> needs resizing occurs *before* it is known whether the key is
  MH> already in the dict.  But if this is the problem, how come we
  MH> haven't been bitten by this before?

It's probably unusual for a dictionary to be in this state when the
compiler decides to update the values.

Jeremy



From MarkH at ActiveState.com  Tue Mar 20 00:57:21 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 20 Mar 2001 10:57:21 +1100
Subject: [Python-Dev] RE: Unicode and the Windows file system.
In-Reply-To: <200103192316.f2JNGYK02041@mira.informatik.hu-berlin.de>
Message-ID: <LCEPIIGDJPKCOIHOBJEPKEEODGAA.MarkH@ActiveState.com>

OK - it appears everyone agrees we should go the "Unicode API" route.  I
actually thought my scheme did not preclude moving to this later.

This is a much bigger can of worms than I have bandwidth to take on at the
moment.  As Martin mentions, what will os.listdir() return on Win9x vs
Win2k?  What does passing a Unicode object to a non-Unicode Win32 platform
mean? etc.  How do Win95/98/ME differ in their Unicode support?  Do the
various service packs for each of these change the basic support?

So unfortunately this simply means the status quo remains until someone
_does_ have the time and inclination.  That may well be me in the future,
but is not now.  It also means that until then, Python programmers will
struggle with this and determine that they can make it work simply by
encoding the Unicode as an "mbcs" string.  Or worse, they will note that
"latin1 seems to work" and use that even though it will work "less often"
than mbcs.  I was simply hoping to automate that encoding using a scheme
that works "most often".

The biggest drawback is that by doing nothing we are _encouraging_ the user
to write broken code.  The way things stand at the moment, the users will
_never_ pass Unicode objects to these APIs (as they don't work) and will
therefore manually encode a string.  To my mind this is _worse_ than what my
scheme proposes - at least my scheme allows Unicode objects to be passed to
the Python functions - python may choose to change the way it handles these
in the future.  But by forcing the user to encode a string we have lost
_all_ meaningful information about the Unicode object and can only hope they
got the encoding right.

If anyone else decides to take this on, please let me know.  However, I fear
that in a couple of years we may still be waiting and in the meantime people
will be coding hacks that will _not_ work in the new scheme.

c'est-la-vie-ly,

Mark.




From mwh21 at cam.ac.uk  Tue Mar 20 01:02:59 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 00:02:59 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Jeremy Hylton's message of "Mon, 19 Mar 2001 18:48:42 -0500 (EST)"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk>

Jeremy Hylton <jeremy at alum.mit.edu> writes:

> >>>>> "MH" == Michael Hudson <mwh21 at cam.ac.uk> writes:
> 
>   MH> Jeremy Hylton <jeremy at alum.mit.edu> writes:
>   >> >>>>> "MWH" == Michael Hudson <mwh21 at cam.ac.uk> writes:
>   >>
>   MWH> [*] I thought that if you used the same keys when you were
>   MWH> iterating over a dict you were safe.  It seems not, at least as
>   MWH> far as I could tell with mounds of debugging printf's.
>   >>
>   >> I did, too.  Anyone know what the problems is?
> 
>   MH> The dict's resizing, it turns out.
> 
> So a hack to make the iteration safe would be to assign an element
> and then delete it?

Yes.  This would be gross beyond belief though.  Particularly as the
normal case is for freevars to be empty.

>   MH> I note that in PyDict_SetItem, the check to see if the dict
>   MH> needs resizing occurs *before* it is known whether the key is
>   MH> already in the dict.  But if this is the problem, how come we
>   MH> haven't been bitten by this before?
> 
> It's probably unusual for a dictionary to be in this state when the
> compiler decides to update the values.

What I meant was that there are bits and pieces of code in the Python
core that blithely pass keys gotten from PyDict_Next into
PyDict_SetItem.  From what I've just learnt, I'd expect this to
occasionally cause glitches of extreme confusing-ness.  Though on
investigation, I don't think any of these bits of code are sensitive
to getting keys out multiple times (which is what happens in this case
- though you must be able to miss keys too).  Might cause the odd leak
here and there.

Cheers,
M.

-- 
  Clue: You've got the appropriate amount of hostility for the
  Monastery, however you are metaphorically getting out of the
  safari jeep and kicking the lions.                         -- coonec
               -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html




From greg at cosc.canterbury.ac.nz  Tue Mar 20 01:19:35 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 20 Mar 2001 12:19:35 +1200 (NZST)
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5FCE5.92A133AB@lemburg.com>
Message-ID: <200103200019.MAA06253@s454.cosc.canterbury.ac.nz>

"M.-A. Lemburg" <mal at lemburg.com>:

> Actually opening a file in record mode and then using
> file.seek() should work on many platforms.

Not on Unix! No space is actually allocated until you
write something, regardless of where you seek to. And
then only the blocks that you touch (files can have
holes in them).
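Greg's point is easy to demonstrate (a sketch; `st_blocks` is Unix-specific, and whether the hole stays unallocated depends on the filesystem):

```python
import os
import tempfile

# Seeking far past end-of-file allocates nothing on Unix; only the
# blocks actually written consume disk space, so the file has a "hole".
fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 10 * 1024 * 1024, os.SEEK_SET)   # seek 10 MB out
    os.write(fd, b"x")                             # touch a single block
    st = os.stat(path)
    print(st.st_size)           # apparent size: 10 MB + 1 byte
    print(st.st_blocks * 512)   # allocated bytes: typically far smaller
finally:
    os.close(fd)
    os.remove(path)
```

So seeking is no way to reserve space: the write can still fail with a full disk when a hole is eventually filled in.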

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Tue Mar 20 01:21:47 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 20 Mar 2001 12:21:47 +1200 (NZST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AB60DAB.D92D12BF@tismer.com>
Message-ID: <200103200021.MAA06256@s454.cosc.canterbury.ac.nz>

Christian Tismer <tismer at tismer.com>:

> It does not
> matter how and where frames were created, it is just impossible
> to jump at a frame that is held by an interpreter on the C stack.

I think I need a clearer idea of what it means for a frame
to be "held by an interpreter".

I gather that each frame has a lock flag. How and when does
this flag get set and cleared?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim.one at home.com  Tue Mar 20 02:48:27 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 19 Mar 2001 20:48:27 -0500
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <20010319141834.X27808@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMHJGAA.tim.one@home.com>

Here's a radical suggestion:  Start an x-platform project on SourceForge,
devoted to producing a C library with a common interface for
platform-dependent crud like "how big is this file?" and "how many bytes free
on this disk?" and "how can I execute a shell command in a portable way?"
(e.g., Tcl's "exec" emulates a subset of Bourne shell syntax, including
redirection and pipes, even on Windows 3.1).

OK, that's too useful.  Nevermind ...




From tismer at tismer.com  Tue Mar 20 06:15:01 2001
From: tismer at tismer.com (Christian Tismer)
Date: Tue, 20 Mar 2001 06:15:01 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103200021.MAA06256@s454.cosc.canterbury.ac.nz>
Message-ID: <3AB6E755.B39C2E62@tismer.com>


Greg Ewing wrote:
> 
> Christian Tismer <tismer at tismer.com>:
> 
> > It does not
> > matter how and where frames were created, it is just impossible
> > to jump at a frame that is held by an interpreter on the C stack.
> 
> I think I need a clearer idea of what it means for a frame
> to be "held by an interpreter".
> 
> I gather that each frame has a lock flag. How and when does
> this flag get set and cleared?

Assume a frame F being executed by an interpreter A.
Now, if this frame calls a function, which in turn
starts another interpreter B, this hides interpreter
A on the C stack. Frame F cannot be run by anything
until interpreter B is finished.
Exactly in this situation, frame F has its lock set,
to prevent crashes.
Such a locked frame cannot be a switch target.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From barry at digicool.com  Tue Mar 20 06:12:17 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Tue, 20 Mar 2001 00:12:17 -0500
Subject: [Python-Dev] Re: What has become of PEP224 ?
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>
Message-ID: <15030.59057.866982.538935@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at python.org> writes:

    GvR> So I see little chance for PEP 224.  Maybe I should just
    GvR> pronounce on this, and declare the PEP rejected.

So, was that a BDFL pronouncement or not? :)

-Barry



From tim_one at email.msn.com  Tue Mar 20 06:57:23 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 20 Mar 2001 00:57:23 -0500
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <200103191312.IAA25747@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGENHJGAA.tim_one@email.msn.com>

[Mark Hammond]
> * os.listdir() returns '\xe0test\xf2' for this file.

[Guido]
> I don't understand.  This is a Latin-1 string.  Can you explain again
> how the MBCS encoding encodes characters outside the Latin-1 range?

I expect this is a coincidence.  MBCS is a generic term for a large number of
distinct variable-length encoding schemes, one or more specific to each
language.  Latin-1 is a subset of some MBCS schemes, but not of others; Mark
was using a German MBCS locale, right?  Across MS's set of MBCS schemes, there's
little consistency:  a one-byte encoding in one of them may well be a "lead
byte" (== the first byte of a two-byte encoding) in another.
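The lead-byte behaviour is easy to see with the cp932 codec (Shift-JIS, one of the MBCS family, bundled with later CPythons): the byte 0xE0 is a complete character in Latin-1 but only the first half of a two-byte character in cp932.

```python
# 0xE0 decodes to 'à' in Latin-1, but in cp932 it is a lead byte:
# on its own it is an incomplete multibyte sequence.
assert b"\xe0".decode("latin-1") == "\u00e0"
try:
    b"\xe0".decode("cp932")
except UnicodeDecodeError as exc:
    print("cp932 rejects a bare lead byte:", exc)
```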

All this stuff is hidden under layers of macros so general that, if you code
it right, you can switch between compiling MBCS code on Win95 and Unicode
code on NT via setting one compiler #define.  Or that's what they advertise.
The multi-lingual Windows app developers at my previous employer were all
bald despite being no older than 23 <wink>.

ascii-boy-ly y'rs  - tim




From tim_one at email.msn.com  Tue Mar 20 07:31:49 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 20 Mar 2001 01:31:49 -0500
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010319084534.A18938@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENJJGAA.tim_one@email.msn.com>

[Neil Schemenauer]
> I like the frame methods.  However, this may be a good idea since
> Jython may implement things quite differently.

Note that the "compare fringes of two trees" example is a classic not because
it's inherently interesting, but because it distills the essence of a
particular *class* of problem (that's why it's popular with academics).

In Icon you need to create co-expressions to solve this problem, because its
generators aren't explicitly resumable, and Icon has no way to spell "kick a
pair of generators in lockstep".  But explicitly resumable generators are in
fact "good enough" for this classic example, which is usually used to
motivate coroutines.

I expect this relates to the XLST/XSLT/whatever-the-heck-it-was example:  if
Paul thought iterators were the bee's knees there, I *bet* in glorious
ignorance that iterators implemented via Icon-style generators would be the
bee's pajamas.

Of course Christian is right that you have to prevent a suspended frame from
getting activated more than once simultaneously; but that's detectable, and
should be considered a programmer error if it happens.




From fredrik at pythonware.com  Tue Mar 20 08:00:51 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 08:00:51 +0100
Subject: [Python-Dev] RE: Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEEODGAA.MarkH@ActiveState.com>
Message-ID: <003a01c0b10b$80e6a650$e46940d5@hagrid>

Mark Hammond wrote:
> OK - it appears everyone agrees we should go the "Unicode API" route.

well, I'd rather play with a minimal (mbcs) patch now, than wait another
year or so for a full unicodification, so if you have the time...

Cheers /F




From tim.one at home.com  Tue Mar 20 08:08:53 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 02:08:53 -0500
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: <200103190709.AAA10053@localhost.localdomain>
Message-ID: <LNBBLJKPBEHFEDALKOLCMENKJGAA.tim.one@home.com>

[Uche Ogbuji]
> Quite interesting.  I brought up this *exact* point at the
> Stackless BOF at IPC9.  I mentioned that the immediate reason
> I was interested in Stackless was to supercharge the efficiency
> of 4XSLT.  I think that a stackless 4XSLT could pretty much
> annihilate the other processors in the field for performance.

Hmm.  I'm interested in clarifying the cost/performance boundaries of the
various approaches.  I don't understand XSLT (I don't even know what it is).
Do you grok the difference between full-blown Stackless and Icon-style
generators?  The correspondent I quoted believed the latter were on-target
for XSLT work, and given the way Python works today generators are easier to
implement than full-blown Stackless.  But while I can speak with some
confidence about the latter, I don't know whether they're sufficient for what
you have in mind.

If this is some flavor of one-at-time tree-traversal algorithm, generators
should suffice.

class TreeNode:
    # with self.value
    #      self.children, a list of TreeNode objects
    ...
    def generate_kids(self):  # pre-order traversal
        suspend self.value
        for kid in self.children:
            for itskids in kid.generate_kids():
                suspend itskids

for k in someTreeNodeObject.generate_kids():
    print k

So the control-flow is thoroughly natural, but you can only suspend to your
immediate invoker (in recursive traversals, this "walks up the chain" of
generators for each result).  With explicitly resumable generator objects,
multiple trees (or even general graphs -- doesn't much matter) can be
traversed in lockstep (or any other interleaving that's desired).

Now decide <wink>.
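Modulo the keyword, the `suspend` sketch above is what later Python spells with `yield`; a runnable rendering of the same pre-order traversal, with a tiny concrete tree:

```python
# Tim's suspend-based sketch, using the yield keyword that later
# Python versions adopted for exactly this kind of traversal.
class TreeNode:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

    def generate_kids(self):  # pre-order traversal
        yield self.value
        for kid in self.children:
            for itskids in kid.generate_kids():
                yield itskids

tree = TreeNode(1, [TreeNode(2, [TreeNode(4)]), TreeNode(3)])
print(list(tree.generate_kids()))  # [1, 2, 4, 3]
```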





From fredrik at pythonware.com  Tue Mar 20 08:36:59 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 08:36:59 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
References: <LNBBLJKPBEHFEDALKOLCMEMHJGAA.tim.one@home.com>
Message-ID: <017a01c0b110$8d132890$e46940d5@hagrid>

tim wrote:
> Here's a radical suggestion:  Start an x-platform project on SourceForge,
> devoted to producing a C library with a common interface for
> platform-dependent crud like "how big is this file?" and "how many bytes free
> on this disk?" and "how can I execute a shell command in a portable way?"
> (e.g., Tcl's "exec" emulates a subset of Bourne shell syntax, including
> redirection and pipes, even on Windows 3.1).

counter-suggestion:

add partial os.statvfs emulation to the posix module for Windows
(and Mac), and write helpers for shutil to do the fancy stuff you
mentioned before.

Cheers /F




From tim.one at home.com  Tue Mar 20 09:30:18 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 03:30:18 -0500
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <017a01c0b110$8d132890$e46940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENPJGAA.tim.one@home.com>

[Fredrik Lundh]
> counter-suggestion:
>
> add partial os.statvfs emulation to the posix module for Windows
> (and Mac), and write helpers for shutil to do the fancy stuff you
> mentioned before.

One of the best things Python ever did was to introduce os.path.getsize() +
friends, saving the bulk of the world from needing to wrestle with the
obscure Unix stat() API.  os.chmod() is another x-platform teachability pain;
if there's anything worth knowing in the bowels of statvfs(), let's please
spell it in a human-friendly way from the start.
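The convenience gap in miniature: the friendly wrapper returns the same number as stat(), with no need to remember which field of the stat result holds the size.

```python
import os
import tempfile

# os.path.getsize() is a human-friendly wrapper over stat().
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"hello")
    os.close(fd)
    print(os.path.getsize(path))        # 5
    print(os.stat(path).st_size)        # 5, the same number
finally:
    os.remove(path)
```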




From fredrik at effbot.org  Tue Mar 20 09:58:53 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Tue, 20 Mar 2001 09:58:53 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
References: <LNBBLJKPBEHFEDALKOLCKENPJGAA.tim.one@home.com>
Message-ID: <01ec01c0b11b$ff9593c0$e46940d5@hagrid>

Tim Peters wrote:
> One of the best things Python ever did was to introduce os.path.getsize() +
> friends, saving the bulk of the world from needing to wrestle with the
> obscure Unix stat() API.

yup (I remember lobbying for those years ago), but that doesn't
mean that we cannot make already existing low-level APIs work
on as many platforms as possible...

(just like os.popen etc)

adding os.statvfs for windows is pretty much a bug fix (for 2.1?),
but adding a new API is not (2.2).

> os.chmod() is another x-platform teachability pain

shutil.chmod("file", "g+x"), anyone?

> if there's anything worth knowing in the bowels of statvfs(), let's
> please spell it in a human-friendly way from the start.

how about os.path.getfreespace("path") and
os.path.gettotalspace("path") ?

Cheers /F
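A minimal sketch of what such a symbolic-mode chmod helper might look like. Note the `symbolic_chmod` name and its single-clause `"g+x"` grammar are hypothetical; no such function exists in shutil.

```python
import os
import stat
import tempfile

# Hypothetical helper in the spirit of shutil.chmod("file", "g+x").
_WHO = {"u": (stat.S_IRUSR, stat.S_IWUSR, stat.S_IXUSR),
        "g": (stat.S_IRGRP, stat.S_IWGRP, stat.S_IXGRP),
        "o": (stat.S_IROTH, stat.S_IWOTH, stat.S_IXOTH)}

def symbolic_chmod(path, spec):
    """Apply one symbolic mode clause like 'g+x' or 'o-r'."""
    who, op, perm = spec[0], spec[1], spec[2]
    bit = _WHO[who]["rwx".index(perm)]
    mode = os.stat(path).st_mode
    os.chmod(path, mode | bit if op == "+" else mode & ~bit)

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o644)
symbolic_chmod(path, "g+x")
print(oct(os.stat(path).st_mode & 0o777))  # 0o654
os.remove(path)
```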




From fredrik at pythonware.com  Tue Mar 20 13:07:23 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 13:07:23 +0100
Subject: [Python-Dev] sys.prefix woes
Message-ID: <04e601c0b136$52ee8e90$0900a8c0@SPIFF>

(windows, 2.0)

it looks like sys.prefix isn't set unless 1) PYTHONHOME is set, or
2) lib/os.py can be found somewhere between the directory your
executable is found in, and the root.

if neither is set, the path is taken from the registry, but sys.prefix
is left blank, and FixTk.py no longer works.

any ideas?  is this a bug?  is there an "official" workaround that
doesn't involve using the time machine to upgrade all BeOpen
and ActiveState kits?

Cheers /F




From guido at digicool.com  Tue Mar 20 13:48:09 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 07:48:09 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Your message of "20 Mar 2001 00:02:59 GMT."
             <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> 
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net>  
            <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <200103201248.HAA29485@cj20424-a.reston1.va.home.com>

> >   MH> The dict's resizing, it turns out.
> > 
> > So a hack to make the iteration safe would be to assign an element
> > and then delete it?
> 
> Yes.  This would be gross beyond belief though.  Particularly as the
> normal case is for freevars to be empty.
> 
> >   MH> I note that in PyDict_SetItem, the check to see if the dict
> >   MH> needs resizing occurs *before* it is known whether the key is
> >   MH> already in the dict.  But if this is the problem, how come we
> >   MH> haven't been bitten by this before?
> > 
> > It's probably unusual for a dictionary to be in this state when the
> > compiler decides to update the values.
> 
> What I meant was that there are bits and pieces of code in the Python
> core that blithely pass keys gotten from PyDict_Next into
> PyDict_SetItem.

Where?

> From what I've just learnt, I'd expect this to
> occasionally cause glitches of extreme confusing-ness.  Though on
> investigation, I don't think any of these bits of code are sensitive
> to getting keys out multiple times (which is what happens in this case
> - though you must be able to miss keys too).  Might cause the odd leak
> here and there.

I'd fix the dict implementation, except that that's tricky.

Checking for a dup key in PyDict_SetItem() before calling dictresize()
slows things down.  Checking in insertdict() is wrong because
dictresize() uses that!

Jeremy, is there a way that you could fix your code to work around
this?  Let's talk about this when you get into the office.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Tue Mar 20 14:03:42 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 08:03:42 -0500
Subject: [Python-Dev] Re: What has become of PEP224 ?
In-Reply-To: Your message of "Tue, 20 Mar 2001 00:12:17 EST."
             <15030.59057.866982.538935@anthem.wooz.org> 
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>  
            <15030.59057.866982.538935@anthem.wooz.org> 
Message-ID: <200103201303.IAA29601@cj20424-a.reston1.va.home.com>

> >>>>> "GvR" == Guido van Rossum <guido at python.org> writes:
> 
>     GvR> So I see little chance for PEP 224.  Maybe I should just
>     GvR> pronounce on this, and declare the PEP rejected.
> 
> So, was that a BDFL pronouncement or not? :)
> 
> -Barry

Yes it was.  I really don't like the syntax, the binding between the
docstring and the documented identifier is too weak.  It's best to do
this explicitly, e.g.

    a = 12*12
    __doc_a__ = """gross"""

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mwh21 at cam.ac.uk  Tue Mar 20 14:30:10 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 13:30:10 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Guido van Rossum's message of "Tue, 20 Mar 2001 07:48:09 -0500"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com>
Message-ID: <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido at digicool.com> writes:

> > >   MH> The dict's resizing, it turns out.
> > > 
> > > So a hack to make the iteration safe would be to assign an element
> > > and then delete it?
> > 
> > Yes.  This would be gross beyond belief though.  Particularly as the
> > normal case is for freevars to be empty.
> > 
> > >   MH> I note that in PyDict_SetItem, the check to see if the dict
> > >   MH> needs resizing occurs *before* it is known whether the key is
> > >   MH> already in the dict.  But if this is the problem, how come we
> > >   MH> haven't been bitten by this before?
> > > 
> > > It's probably unusual for a dictionary to be in this state when the
> > > compiler decides to update the values.
> > 
> > What I meant was that there are bits and pieces of code in the Python
> > core that blithely pass keys gotten from PyDict_Next into
> > PyDict_SetItem.
> 
> Where?

import.c:PyImport_Cleanup
moduleobject.c:_PyModule_Clear

Hrm, I was sure there were more than that, but there don't seem to be.
Sorry for the alarmism.

> > From what I've just learnt, I'd expect this to
> > occasionally cause glitches of extreme confusing-ness.  Though on
> > investigation, I don't think any of these bits of code are sensitive
> > to getting keys out multiple times (which is what happens in this case
> > - though you must be able to miss keys too).  Might cause the odd leak
> > here and there.
> 
> I'd fix the dict implementation, except that that's tricky.

I'd got that far...

> Checking for a dup key in PyDict_SetItem() before calling dictresize()
> slows things down.  Checking in insertdict() is wrong because
> dictresize() uses that!

Maybe you could do the check for resize *after* the call to
insertdict?  I think that would work, but I wouldn't like to go
messing with such a performance critical bit of code without some
careful thinking.

Cheers,
M.

-- 
  You sound surprised.  We're talking about a government department
  here - they have procedures, not intelligence.
                                            -- Ben Hutchings, cam.misc




From mwh21 at cam.ac.uk  Tue Mar 20 14:44:50 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 13:44:50 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Michael Hudson's message of "20 Mar 2001 13:30:10 +0000"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com> <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <m3ae6gh7vx.fsf@atrus.jesus.cam.ac.uk>

Michael Hudson <mwh21 at cam.ac.uk> writes:

> Guido van Rossum <guido at digicool.com> writes:
> 
> > Checking for a dup key in PyDict_SetItem() before calling dictresize()
> > slows things down.  Checking in insertdict() is wrong because
> > dictresize() uses that!
> 
> Maybe you could do the check for resize *after* the call to
> insertdict?  I think that would work, but I wouldn't like to go
> messing with such a performance critical bit of code without some
> careful thinking.

Indeed; this tiny little patch:

Index: Objects/dictobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/dictobject.c,v
retrieving revision 2.73
diff -c -r2.73 dictobject.c
*** Objects/dictobject.c	2001/01/18 00:39:02	2.73
--- Objects/dictobject.c	2001/03/20 13:38:04
***************
*** 496,501 ****
--- 496,508 ----
  	Py_INCREF(value);
  	Py_INCREF(key);
  	insertdict(mp, key, hash, value);
+ 	/* if fill >= 2/3 size, double in size */
+ 	if (mp->ma_fill*3 >= mp->ma_size*2) {
+ 		if (dictresize(mp, mp->ma_used*2) != 0) {
+ 			if (mp->ma_fill+1 > mp->ma_size)
+ 				return -1;
+ 		}
+ 	}
  	return 0;
  }
  
fixes Ping's reported crash.  You can't naively (as I did at first)
*only* check after the insertdict, 'cause dicts are created with 0
size.

Currently building from scratch to do some performance testing.

Cheers,
M.

-- 
  It's a measure of how much I love Python that I moved to VA, where
  if things don't work out Guido will buy a plantation and put us to
  work harvesting peanuts instead.     -- Tim Peters, comp.lang.python




From fredrik at pythonware.com  Tue Mar 20 14:58:29 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 14:58:29 +0100
Subject: [Python-Dev] sys.prefix woes
References: <04e601c0b136$52ee8e90$0900a8c0@SPIFF>
Message-ID: <054e01c0b145$d9d727f0$0900a8c0@SPIFF>

I wrote:
> any ideas?  is this a bug?  is there an "official" workaround that
> doesn't involve using the time machine to upgrade all BeOpen
> and ActiveState kits?

I found a workaround (a place to put some app-specific python code
that runs before anyone actually attempts to use sys.prefix)

still looks like a bug, though.  I'll post it to sourceforge.

Cheers /F




From guido at digicool.com  Tue Mar 20 15:32:00 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 09:32:00 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Your message of "20 Mar 2001 13:30:10 GMT."
             <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk> 
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com>  
            <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <200103201432.JAA00360@cj20424-a.reston1.va.home.com>

> > Checking for a dup key in PyDict_SetItem() before calling dictresize()
> > slows things down.  Checking in insertdict() is wrong because
> > dictresize() uses that!
> 
> Maybe you could do the check for resize *after* the call to
> insertdict?  I think that would work, but I wouldn't like to go
> messing with such a performance critical bit of code without some
> careful thinking.

No, that could still decide to resize, couldn't it?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Tue Mar 20 15:33:20 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 09:33:20 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Your message of "20 Mar 2001 13:30:10 GMT."
             <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk> 
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com>  
            <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <200103201433.JAA00373@cj20424-a.reston1.va.home.com>

Ah, the solution is simple.  Check for identical keys only when about
to resize:

	/* if fill >= 2/3 size, double in size */
	if (mp->ma_fill*3 >= mp->ma_size*2) {
		***** test here *****
		if (dictresize(mp, mp->ma_used*2) != 0) {
			if (mp->ma_fill+1 > mp->ma_size)
				return -1;
		}
	}

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mwh21 at cam.ac.uk  Tue Mar 20 16:13:35 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 15:13:35 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Guido van Rossum's message of "Tue, 20 Mar 2001 09:33:20 -0500"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com> <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk> <200103201433.JAA00373@cj20424-a.reston1.va.home.com>
Message-ID: <m34rwoh3s0.fsf@atrus.jesus.cam.ac.uk>

Does anyone know how to reply to two messages gracefully in gnus?

Guido van Rossum <guido at digicool.com> writes:

> > Maybe you could do the check for resize *after* the call to
> > insertdict?  I think that would work, but I wouldn't like to go
> > messing with such a performance critical bit of code without some
> > careful thinking.
>
> No, that could still decide to resize, couldn't it?

Yes, but not when you're inserting on a key that is already in the
dictionary - because the resize would have happened when the key was
inserted into the dictionary, and thus the problem we're seeing here
wouldn't happen.

What's happening in Ping's test case is that the dict is in some sense
being prepped to resize when an item is added but not actually
resizing until PyDict_SetItem is called again, which is unfortunately
inside a PyDict_Next loop.

Guido van Rossum <guido at digicool.com> writes:

> Ah, the solution is simple.  Check for identical keys only when about
> to resize:
> 
> 	/* if fill >= 2/3 size, double in size */
> 	if (mp->ma_fill*3 >= mp->ma_size*2) {
> 		***** test here *****
> 		if (dictresize(mp, mp->ma_used*2) != 0) {
> 			if (mp->ma_fill+1 > mp->ma_size)
> 				return -1;
> 		}
> 	}

This might also do nasty things to performance - this code path gets
travelled fairly often for small dicts.

Does anybody know the average (mean/mode/median) size for dicts in
a "typical" python program?

  -------

Using mal's pybench with and without the patch I posted shows a 0.30%
slowdown, including these interesting lines:

                  DictCreation:    1662.80 ms   11.09 us  +34.23%
        SimpleDictManipulation:     764.50 ms    2.55 us  -15.67%

DictCreation repeatedly creates dicts of size 0 and 3.
SimpleDictManipulation repeatedly adds six elements to a dict and then
deletes them again.

Dicts of size 3 are likely to be the worst case wrt. my patch; without
it, they will have a ma_fill of 3 and a ma_size of 4 (but calling
PyDict_SetItem again will trigger a resize - this is what happens in
Ping's example), but with my patch they will always have an ma_fill of
3 and a ma_size of 8.  Hence why the DictCreation is so much worse,
and why I asked the question about average dict sizes.

Mind you, 6 is a similar edge case, so I don't know why
SimpleDictManipulation does better.  Maybe something to do with
collisions or memory behaviour.
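The arithmetic can be simulated directly. The parameters below are assumptions matching the 2001-era code (minimum table size 4, resize target the smallest power of two above used*2): three inserts trip the check-after-insert rule (3*3 >= 4*2) and double the table to 8, as described.

```python
# Simulate the check-after-insert variant of the 2/3-fill rule,
# assuming MINSIZE = 4 and doubling to just above used*2 on resize.
def simulated_sizes(n, minsize=4):
    size, fill, used = minsize, 0, 0
    history = []
    for _ in range(n):
        fill += 1
        used += 1
        if fill * 3 >= size * 2:          # fill >= 2/3 of size: resize
            newsize = minsize
            while newsize <= used * 2:    # smallest power of two > used*2
                newsize *= 2
            size, fill = newsize, used
        history.append(size)
    return history

print(simulated_sizes(3))  # [4, 4, 8]
print(simulated_sizes(6))  # [4, 4, 8, 8, 8, 16]
```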

Cheers,
M.

-- 
  I don't remember any dirty green trousers.
                                             -- Ian Jackson, ucam.chat




From skip at pobox.com  Tue Mar 20 16:19:54 2001
From: skip at pobox.com (Skip Montanaro)
Date: Tue, 20 Mar 2001 09:19:54 -0600 (CST)
Subject: [Python-Dev] zipfile.py - detect if zipinfo is a dir  (fwd)
Message-ID: <15031.29978.95112.488244@beluga.mojam.com>

Not sure why I received this note.  I am passing it along to Jim Ahlstrom
and python-dev.

Skip

-------------- next part --------------
An embedded message was scrubbed...
From: Stephane Matamontero <dev1.gemodek at t-online.de>
Subject: zipfile.py - detect if zipinfo is a dir 
Date: Tue, 20 Mar 2001 06:39:27 -0800
Size: 2485
URL: <http://mail.python.org/pipermail/python-dev/attachments/20010320/5070f250/attachment.eml>

From tim.one at home.com  Tue Mar 20 17:01:21 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 11:01:21 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <m34rwoh3s0.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEONJGAA.tim.one@home.com>

[Michael Hudson]
>>> Maybe you could do the check for resize *after* the call to
>>> insertdict?  I think that would work, but I wouldn't like to go
>>> messing with such a performance critical bit of code without some
>>> careful thinking.

[Guido]
>> No, that could still decide to resize, couldn't it?

[Michael]
> Yes, but not when you're inserting on a key that is already in the
> dictionary - because the resize would have happened when the key was
> inserted into the dictionary, and thus the problem we're seeing here
> wouldn't happen.

Careful:  this comment is only half the truth:

	/* if fill >= 2/3 size, double in size */

The dictresize following is also how dicts *shrink*.  That is, build up a
dict, delete a whole bunch of keys, and nothing at all happens to the size
until you call setitem again (actually, I think you need to call it more than
once -- the behavior is tricky).  In any case, that a key is already in the
dict does not guarantee that a dict won't resize (via shrinking) when doing a
setitem.

We could bite the bullet and add a new PyDict_AdjustSize function, just
duplicating the resize logic.  Then loops that know they won't be changing
the size can call that before starting.  Delicate, though.




From jim at interet.com  Tue Mar 20 18:42:11 2001
From: jim at interet.com (James C. Ahlstrom)
Date: Tue, 20 Mar 2001 12:42:11 -0500
Subject: [Python-Dev] Re: zipfile.py - detect if zipinfo is a dir  (fwd)
References: <15031.29978.95112.488244@beluga.mojam.com>
Message-ID: <3AB79673.C29C0BBE@interet.com>

Skip Montanaro wrote:
> 
> Not sure why I received this note.  I am passing it along to Jim Ahlstrom
> and python-dev.

Thanks.  I will look into it.

JimA



From fredrik at pythonware.com  Tue Mar 20 20:20:38 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 20:20:38 +0100
Subject: [Python-Dev] Function in os module for available disk space, why  not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF> <3AB62EAE.FCFD7C9F@lemburg.com>
Message-ID: <048401c0b172$dd6892a0$e46940d5@hagrid>

mal wrote:

>         return s[statvfs.F_BAVAIL] * long(s[statvfs.F_BSIZE])

F_FRSIZE, not F_BSIZE (f_bavail counts fragments of f_frsize bytes)

(and my plan is to make a statvfs subset available on
all platforms, which makes your code even simpler...)

Cheers /F
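For reference, os.statvfs did become available from Python on Unix; the free-space computation being discussed looks like this there (f_bavail counts fragments of f_frsize bytes):

```python
import os

# Free and total space on the filesystem holding the current directory.
# os.statvfs is Unix-only; f_blocks/f_bfree/f_bavail are all in units
# of f_frsize, the fundamental fragment size.
st = os.statvfs(".")
free_bytes = st.f_bavail * st.f_frsize   # available to non-root users
total_bytes = st.f_blocks * st.f_frsize
print(free_bytes <= total_bytes)  # True
```

Later still, shutil.disk_usage (Python 3.3+) wraps the same numbers behind a portable, human-friendly API, much as Tim asks for above.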




From jack at oratrix.nl  Tue Mar 20 21:34:51 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 20 Mar 2001 21:34:51 +0100
Subject: [Python-Dev] Test for case-sensitive imports?
Message-ID: <20010320203457.3A72EEA11D@oratrix.oratrix.nl>

Hmm, apparently the flurry of changes to the case-checking code in
import has broken the case-checks for the macintosh. I'll fix that,
but maybe we should add a testcase for case-sensitive import?

And a related point: the logic for determining whether to use a
mac-specific, windows-specific or unix-specific routine in the getpass 
module is error prone.

Why these two points are related is left as an exercise to the reader:-)
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From jack at oratrix.nl  Tue Mar 20 21:47:37 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 20 Mar 2001 21:47:37 +0100
Subject: [Python-Dev] test_coercion failing
Message-ID: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>

Test_coercion fails on the Mac (current CVS sources) with
We expected (repr): '(1+0j)'
But instead we got: '(1-0j)'
test test_coercion failed -- Writing: '(1-0j)', expected: '(1+0j)'

The computation it was doing was "2 / (2+0j) =".

To my mathematical eye it shouldn't be complaining in the first place, 
but I assume this may be either a missing round() somewhere or a
symptom of a genuine bug.

Can anyone point me in the right direction?
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From guido at digicool.com  Tue Mar 20 22:00:26 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 16:00:26 -0500
Subject: [Python-Dev] Test for case-sensitive imports?
In-Reply-To: Your message of "Tue, 20 Mar 2001 21:34:51 +0100."
             <20010320203457.3A72EEA11D@oratrix.oratrix.nl> 
References: <20010320203457.3A72EEA11D@oratrix.oratrix.nl> 
Message-ID: <200103202100.QAA01606@cj20424-a.reston1.va.home.com>

> Hmm, apparently the flurry of changes to the case-checking code in
> import has broken the case-checks for the macintosh. I'll fix that,
> but maybe we should add a testcase for case-sensitive import?

Thanks -- yes, please add a testcase!  ("import String" should do it,
right? :-)

> And a related point: the logic for determining whether to use a
> mac-specific, windows-specific or unix-specific routine in the getpass 
> module is error prone.

Can you fix that too?

> Why these two points are related is left as an exercise to the reader:-)

:-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mal at lemburg.com  Tue Mar 20 22:03:40 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 20 Mar 2001 22:03:40 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <LNBBLJKPBEHFEDALKOLCKENPJGAA.tim.one@home.com> <01ec01c0b11b$ff9593c0$e46940d5@hagrid>
Message-ID: <3AB7C5AC.DE61F186@lemburg.com>

Fredrik Lundh wrote:
> 
> Tim Peters wrote:
> > One of the best things Python ever did was to introduce os.path.getsize() +
> > friends, saving the bulk of the world from needing to wrestle with the
> > obscure Unix stat() API.
> 
> yup (I remember lobbying for those years ago), but that doesn't
> mean that we cannot make already existing low-level APIs work
> on as many platforms as possible...
> 
> (just like os.popen etc)
> 
> adding os.statvfs for windows is pretty much a bug fix (for 2.1?),
> but adding a new API is not (2.2).
> 
> > os.chmod() is another x-platform teachability pain
> 
> shutil.chmod("file", "g+x"), anyone?

Wasn't shutil declared obsolete ?
 
> > if there's anything worth knowing in the bowels of statvfs(), let's
> > please spell it in a human-friendly way from the start.
> 
> how about os.path.getfreespace("path") and
> os.path.gettotalspace("path") ?

Anybody care to add the missing parts in:

import sys,os

try:
    os.statvfs

except AttributeError:
    # Win32 implementation...
    # Mac implementation...
    pass

else:
    import statvfs

    def freespace(path):
        """ freespace(path) -> integer
        Return the number of bytes available to the user on the file system
        pointed to by path."""
        s = os.statvfs(path)
        return s[statvfs.F_BAVAIL] * long(s[statvfs.F_BSIZE])

if __name__=='__main__':
    path = sys.argv[1]
    print 'Free space on %s: %i kB (%i bytes)' % (path,
                                                  freespace(path) / 1024,
                                                  freespace(path))


totalspace() should be just as easy to add and I'm pretty
sure that you can get that information on *all* platforms
(not necessarily using the same APIs though).
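For reference, a later Python did grow exactly this: shutil.disk_usage()
(Python 3.3+) reports total, used, and free bytes on all major platforms.
The two proposed helpers reduce to a sketch like:

```python
import shutil

def freespace(path):
    """Bytes available to the user on the filesystem containing path."""
    return shutil.disk_usage(path).free

def totalspace(path):
    """Total size, in bytes, of the filesystem containing path."""
    return shutil.disk_usage(path).total

if __name__ == '__main__':
    print('Free space on .: %d kB (%d bytes)' % (freespace('.') // 1024,
                                                 freespace('.')))
```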

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at digicool.com  Tue Mar 20 22:16:32 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 16:16:32 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: Your message of "Tue, 20 Mar 2001 21:47:37 +0100."
             <20010320204742.BC08AEA11D@oratrix.oratrix.nl> 
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl> 
Message-ID: <200103202116.QAA01770@cj20424-a.reston1.va.home.com>

> Test_coercion fails on the Mac (current CVS sources) with
> We expected (repr): '(1+0j)'
> But instead we got: '(1-0j)'
> test test_coercion failed -- Writing: '(1-0j)', expected: '(1+0j)'
> 
> The computation it was doing was "2 / (2+0j) =".
> 
> To my mathematical eye it shouldn't be complaining in the first place, 
> but I assume this may be either a missing round() somewhere or a
> symptom of a genuine bug.
> 
> Can anyone point me in the right direction?

Tim admits that he changed complex division and repr().  So that's
where you might want to look.  If you wait a bit, Tim will check his
algorithm to see if a "minus zero" can pop out of it.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From aahz at rahul.net  Tue Mar 20 22:38:27 2001
From: aahz at rahul.net (Aahz Maruch)
Date: Tue, 20 Mar 2001 13:38:27 -0800 (PST)
Subject: [Python-Dev] Function in os module for available disk space, why
In-Reply-To: <3AB7C5AC.DE61F186@lemburg.com> from "M.-A. Lemburg" at Mar 20, 2001 10:03:40 PM
Message-ID: <20010320213828.2D30F99C80@waltz.rahul.net>

M.-A. Lemburg wrote:
> 
> Wasn't shutil declared obsolete ?

<blink>  What?!
-- 
                      --- Aahz (@pobox.com)

Hugs and backrubs -- I break Rule 6             http://www.rahul.net/aahz
Androgynous poly kinky vanilla queer het

I don't really mind a person having the last whine, but I do mind
someone else having the last self-righteous whine.



From paul at pfdubois.com  Wed Mar 21 00:56:06 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Tue, 20 Mar 2001 15:56:06 -0800
Subject: [Python-Dev] PEP 242 Released
Message-ID: <ADEOIFHFONCLEEPKCACCGEANCHAA.paul@pfdubois.com>

PEP: 242
Title: Numeric Kinds
Version: $Revision: 1.1 $
Author: paul at pfdubois.com (Paul F. Dubois)
Status: Draft
Type: Standards Track
Created: 17-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    This proposal gives the user optional control over the precision
    and range of numeric computations so that a computation can be
    written once and run anywhere with at least the desired precision
    and range.  It is backward compatible with existing code.  The
    meaning of decimal literals is clarified.


Rationale

    Currently, Fortran 90 is the only language in which one can write a
    program in a portable way that uses floating point and gets roughly
    the same answer regardless of platform -- or refuses to compile if
    that is not possible.  Python currently has only one floating point
    type, equal to a C double in the C implementation.

    No type exists corresponding to single or quad floats.  It would
    complicate the language to try to introduce such types directly
    and their subsequent use would not be portable.  This proposal is
    similar to the Fortran 90 "kind" solution, adapted to the Python
    environment.  With this facility an entire calculation can be
    switched from one level of precision to another by changing a
    single line.  If the desired precision does not exist on a
    particular machine, the program will fail rather than get the
    wrong answer.  Since coding in this style would involve an early
    call to the routine that will fail, this is the next best thing to
    not compiling.


Supported Kinds

    Each Python compiler may define as many "kinds" of integer and
    floating point numbers as it likes, except that it must support at
    least two kinds of integer corresponding to the existing int and
    long, and must support at least one kind of floating point number,
    equivalent to the present float.  The range and precision of
    these kinds are processor dependent, as at present, except for the
    "long integer" kind, which can hold an arbitrary integer.  The
    built-in functions int(), float(), long() and complex() convert
    inputs to these default kinds as they do at present.  (Note that a
    Unicode string is actually a different "kind" of string and that a
    sufficiently knowledgeable person might be able to expand this PEP
    to cover that case.)

    Within each type (integer, floating, and complex) the compiler
    supports a linearly-ordered set of kinds, with the ordering
    determined by the ability to hold numbers of an increased range
    and/or precision.


Kind Objects

    Three new standard functions are defined in a module named
    "kinds".  They return callable objects called kind objects.  Each
    int or floating kind object f has the signature result = f(x), and
    each complex kind object has the signature result = f(x, y=0.).

    int_kind(n)
        For n >= 1, return a callable object whose result is an
        integer kind that will hold an integer number in the open
        interval (-10**n,10**n).  This function always succeeds, since
        it can return the 'long' kind if it has to. The kind object
        accepts arguments that are integers including longs.  If n ==
        0, returns the kind object corresponding to long.

    float_kind(nd, n)
        For nd >= 0 and n >= 1, return a callable object whose result
        is a floating point kind that will hold a floating-point
        number with at least nd digits of precision and a base-10
        exponent in the open interval (-n, n).  The kind object
        accepts arguments that are integer or real.

    complex_kind(nd, n)
        Return a callable object whose result is a complex kind that
        will hold a complex number each of whose components
        (.real, .imag) is of kind float_kind(nd, n).  The kind object
        will accept one argument that is integer, real, or complex, or
        two arguments, each integer or real.

    The compiler will return a kind object corresponding to the least
    of its available set of kinds for that type that has the desired
    properties.  If no kind with the desired qualities exists in a
    given implementation an OverflowError exception is thrown.  A kind
    function converts its argument to the target kind, but if the
    result does not fit in the target kind's range, an OverflowError
    exception is thrown.

    Kind objects also accept a string argument for conversion of
    literal notation to their kind.

    Besides their callable behavior, kind objects have attributes
    giving the traits of the kind in question.  The list of traits
    needs to be completed.
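    A minimal sketch of an int_kind implementation under this spec
    (hypothetical: the trait attributes are omitted, and in a Python
    with unified integers the unbounded int plays the role of 'long'):

```python
def int_kind(n):
    """Kind object for integers in the open interval (-10**n, 10**n)."""
    if n == 0:
        return int  # the unbounded 'long' kind always succeeds
    limit = 10 ** n
    def kind(x):
        value = int(x)  # also accepts literal strings, per the PEP
        if not -limit < value < limit:
            raise OverflowError("%r does not fit in int_kind(%d)" % (x, n))
        return value
    return kind

tinyint = int_kind(1)
print(tinyint(9))     # fits: open interval (-10, 10)
print(tinyint("-9"))  # string literals convert too
```

    float_kind and complex_kind would follow the same pattern, checking
    precision and exponent range instead of magnitude.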


The Meaning of Literal Values

    Literal integer values without a trailing L are of the least
    integer kind required to represent them.  An integer literal with
    a trailing L is a long.  Literal decimal values are of the
    greatest available binary floating-point kind.


Concerning Infinite Floating Precision

    This section makes no proposals and can be omitted from
    consideration.  It is for illuminating an intentionally
    unimplemented 'corner' of the design.

    This PEP does not propose the creation of an infinite precision
    floating point type, just leaves room for it.  Just as int_kind(0)
    returns the long kind object, if in the future an infinitely
    precise decimal kind is available, float_kind(0,0) could return a
    function that converts to that type.  Since such a kind function
    accepts string arguments, programs could then be written that are
    completely precise.  Perhaps in analogy to r'a raw string', 1.3r
    might be available as syntactic sugar for calling the infinite
    floating kind object with argument '1.3'.  r could be thought of
    as meaning 'rational'.


Complex numbers and kinds

    Complex numbers are always pairs of floating-point numbers with
    the same kind.  A Python compiler must support a complex analog of
    each floating point kind it supports, if it supports complex
    numbers at all.


Coercion

    In an expression, coercion between different kinds is to the
    greater kind.  For this purpose, all complex kinds are "greater
    than" all floating-point kinds, and all floating-point kinds are
    "greater than" all integer kinds.


Examples

    In module myprecision.py:

        import kinds
        tinyint = kinds.int_kind(1)
        single = kinds.float_kind(6, 90)
        double = kinds.float_kind(15, 300)
        csingle = kinds.complex_kind(6, 90)

    In the rest of my code:

        from myprecision import tinyint, single, double, csingle
        n = tinyint(3)
        x = double(1.e20)
        z = 1.2
        # builtin float gets you the default float kind, properties unknown
        w = x * float(z)
        w = x * double(z)
        u = csingle(x + z * 1.0j)
        u2 = csingle(x+z, 1.0)

    Note how that entire code can then be changed to a higher
    precision by changing the arguments in myprecision.py.

    Comment: note that you aren't promised that single != double; but
    you are promised that double(1.e20) will hold a number with 15
    decimal digits of precision and a range up to 10**300 or that the
    float_kind call will fail.


Open Issues

    The assertion that a decimal literal means a binary floating-point
    value of the largest available kind is in conflict with other
    proposals about Python's numeric model.  This PEP asserts that
    these other proposals are wrong and that part of them should not
    be implemented.

    Determine the exact list of traits for integer and floating point
    numbers.  There are some standard Fortran routines that do this
    but I have to track them down.  Also there should be information
    sufficient to create a Numeric array of an equal or greater kind.


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:




From tim.one at home.com  Wed Mar 21 04:33:15 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 22:33:15 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>

Everyone!  Run this program under current CVS:

x = 0.0
print "%.17g" % -x
print "%+.17g" % -x

What do you get?  WinTel prints "0" for the first and "+0" for the second.

C89 doesn't define the results.

C99 requires "-0" for both (on boxes with signed floating zeroes, which is
virtually all boxes today due to IEEE 754).

I don't want to argue the C rules, I just want to know whether this *does*
vary across current platforms.




From tim.one at home.com  Wed Mar 21 04:46:04 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 22:46:04 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <200103202116.QAA01770@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBDJHAA.tim.one@home.com>

[Guido]
> ...
> If you wait a bit, Tim will check his algorithm to see if
> a "minus zero" can pop out of it.

I'm afraid Jack will have to work harder than that.  He should have gotten a
minus 0 out of this one if and only if he got a minus 0 before, and under 754
rules he *will* get a minus 0 if and only if he told his 754 hardware to use
its "to minus infinity" rounding mode.

Is test_coercion failing on any platform other than Macintosh?




From tim.one at home.com  Wed Mar 21 05:01:13 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 23:01:13 -0500
Subject: [Python-Dev] Test for case-sensitive imports?
In-Reply-To: <200103202100.QAA01606@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBEJHAA.tim.one@home.com>

[ Guido van Rossum]
> Hmm, apparently the flurry of changes to the case-checking code in
> import has broken the case-checks for the macintosh.

Hmm.  This should have been broken way back in 2.1a1, as the code you later
repaired was introduced by the first release of Mac OS X changes.  Try to
stay more current in the future <wink>.

> I'll fix that, but maybe we should add a testcase for
> case-sensitive import?

Yup!  Done now.




From uche.ogbuji at fourthought.com  Wed Mar 21 05:23:01 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Tue, 20 Mar 2001 21:23:01 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: Message from "Tim Peters" <tim.one@home.com> 
   of "Tue, 20 Mar 2001 02:08:53 EST." <LNBBLJKPBEHFEDALKOLCMENKJGAA.tim.one@home.com> 
Message-ID: <200103210423.VAA20300@localhost.localdomain>

> [Uche Ogbuji]
> > Quite interesting.  I brought up this *exact* point at the
> > Stackless BOF at IPC9.  I mentioned that the immediate reason
> > I was interested in Stackless was to supercharge the efficiency
> > of 4XSLT.  I think that a stackless 4XSLT could pretty much
> > annihilate the other processors in the field for performance.
> 
> Hmm.  I'm interested in clarifying the cost/performance boundaries of the
> various approaches.  I don't understand XSLT (I don't even know what it is).
> Do you grok the difference between full-blown Stackless and Icon-style
> generators?

To a decent extent, based on reading your posts carefully.

> The correspondent I quoted believed the latter were on-target
> for XSLT work, and given the way Python works today generators are easier to
> implement than full-blown Stackless.  But while I can speak with some
> confidence about the latter, I don't know whether they're sufficient for what
> you have in mind.

Based on a discussion with Christian at IPC9, they are.  I should have been 
more clear about that.  My main need is to be able to change a bit of context 
and invoke a different execution path, without going through the full overhead 
of a function call.  XSLT, if written "naturally", tends to involve huge 
numbers of such tweak-context-and-branch operations.

> If this is some flavor of one-at-time tree-traversal algorithm, generators
> should suffice.
> 
> class TreeNode:
>     # with self.value
>     #      self.children, a list of TreeNode objects
>     ...
>     def generate_kids(self):  # pre-order traversal
>         suspend self.value
>         for kid in self.children:
>             for itskids in kid.generate_kids():
>                 suspend itskids
> 
> for k in someTreeNodeObject.generate_kids():
>     print k
> 
> So the control-flow is thoroughly natural, but you can only suspend to your
> immediate invoker (in recursive traversals, this "walks up the chain" of
> generators for each result).  With explicitly resumable generator objects,
> multiple trees (or even general graphs -- doesn't much matter) can be
> traversed in lockstep (or any other interleaving that's desired).
> 
> Now decide <wink>.

Suspending only to the invoker should do the trick because it is typically a 
single XSLT instruction that governs multiple tree-operations with varied 
context.

At IPC9, Guido put up a poll of likely use of stackless features, and it was a 
pretty clear arithmetic progression from those who wanted to use microthreads, 
to those who wanted co-routines, to those who wanted just generators.  The 
generator folks were probably 2/3 of the assembly.  Looks as if many have 
decided, and they seem to agree with you.
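Tim's hypothetical "suspend" pseudocode maps one-for-one onto the "yield"
statement Python eventually adopted (PEP 255, Python 2.2); a runnable
version of the same pre-order traversal:

```python
class TreeNode:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

    def generate_kids(self):  # pre-order traversal, as in the sketch
        yield self.value
        for kid in self.children:
            # Each result "walks up the chain" of nested generators.
            for v in kid.generate_kids():
                yield v

tree = TreeNode(1, [TreeNode(2), TreeNode(3, [TreeNode(4)])])
print(list(tree.generate_kids()))  # [1, 2, 3, 4]
```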


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From greg at cosc.canterbury.ac.nz  Wed Mar 21 05:49:33 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 21 Mar 2001 16:49:33 +1200 (NZST)
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103210423.VAA20300@localhost.localdomain>
Message-ID: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>

>     def generate_kids(self):  # pre-order traversal
>         suspend self.value
>         for kid in self.children:
>             for itskids in kid.generate_kids():
>                 suspend itskids

Can I make a suggestion: If we're going to get this generator
stuff, I think it would read better if the suspending statement
were

   yield x

rather than

   suspend x

because x is not the thing that we are suspending!

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From fdrake at acm.org  Wed Mar 21 05:58:10 2001
From: fdrake at acm.org (Fred L. Drake)
Date: Tue, 20 Mar 2001 23:58:10 -0500
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>
Message-ID: <web-1702694@digicool.com>

Greg Ewing <greg at cosc.canterbury.ac.nz> wrote:
 > stuff, I think it would read better if the suspending
 > statement were
 > 
 >    yield x
 > 
 > rather than
 > 
 >    suspend x

  I agree; this really improves readability.  I'm sure
someone knows of a precedent for the "suspend" keyword, but
the only one I recall seeing before is "yield" (Sather).


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From nas at arctrix.com  Wed Mar 21 06:04:42 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Tue, 20 Mar 2001 21:04:42 -0800
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>; from tim.one@home.com on Tue, Mar 20, 2001 at 10:33:15PM -0500
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl> <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <20010320210442.A22819@glacier.fnational.com>

On Tue, Mar 20, 2001 at 10:33:15PM -0500, Tim Peters wrote:
> Everyone!  Run this program under current CVS:

There are probably lots of Linux testers around but here's what I
get:

    Python 2.1b2 (#2, Mar 20 2001, 23:52:29) 
    [GCC 2.95.3 20010219 (prerelease)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> x = 0.0
    >>> print "%.17g" % -x
    -0
    >>> print "%+.17g" % -x
    -0

libc is GNU 2.2.2  (if that matters).  test_coercion works for me
too.  Is test_coercion testing too much accidental implementation
behavior?

  Neil



From ping at lfw.org  Wed Mar 21 07:14:57 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 20 Mar 2001 22:14:57 -0800 (PST)
Subject: [Python-Dev] Re: Generator syntax
In-Reply-To: <web-1702694@digicool.com>
Message-ID: <Pine.LNX.4.10.10103202213070.4368-100000@skuld.kingmanhall.org>

Greg Ewing <greg at cosc.canterbury.ac.nz> wrote:
> stuff, I think it would read better if the suspending
> statement were
> 
>    yield x
> 
> rather than
> 
>    suspend x

Fred Drake wrote:
>   I agree; this really improves readability.

Indeed, shortly after i wrote my generator examples, i wished i'd
written "generate x" rather than "suspend x".  "yield x" is good too.


-- ?!ng

Happiness comes more from loving than being loved; and often when our
affection seems wounded it is only our vanity bleeding. To love, and
to be hurt often, and to love again--this is the brave and happy life.
    -- J. E. Buchrose 




From tim.one at home.com  Wed Mar 21 08:15:23 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 02:15:23 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <20010320210442.A22819@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEBNJHAA.tim.one@home.com>

[Neil Schemenauer, among others confirming Linux behavior]
> There are probably lots of Linux testers around but here's what I
> get:
>
>     Python 2.1b2 (#2, Mar 20 2001, 23:52:29)
>     [GCC 2.95.3 20010219 (prerelease)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> x = 0.0
>     >>> print "%.17g" % -x
>     -0
>     >>> print "%+.17g" % -x
>     -0
>
> libc is GNU 2.2.2  (if that matters).

Indeed, libc is probably the *only* thing that matters (Python defers to the
platform libc for float formatting).

> test_coercion works for me too.  Is test_coercion testing too much
> accidental implementation behavior?

I don't think so.  As a later message said, Jack *should* be getting a minus
0 if and only if he's running on an IEEE-754 box (extremely likely) and set
the rounding mode to minus-infinity (extremely unlikely).

But we don't yet know what the above prints on *his* box, so still don't know
whether that's relevant.

WRT display of signed zeroes (which may or may not have something to do with
Jack's problem), Python obviously varies across platforms.  But there is no
portable way in C89 to determine the sign of a zero, so we either live with
the cross-platform discrepancies, or force zeroes on output to always be
positive (in opposition to what C99 mandates).  (Note that I reject out of
hand that we #ifdef the snot out of the code to be able to detect the sign of
a 0 on various platforms -- Python doesn't conform to any other 754 rules,
and this one is minor.)

Ah, this is coming back to me now:  at Dragon this also popped up in our C++
code.  At least one flavor of Unix there also displayed -0 as if positive.  I
fiddled our output to suppress it, a la

def output(afloat):
    if not afloat:
        afloat *= afloat  # forces -0 and +0 to +0
    print afloat

(but in C++ <wink>).

would-rather-understand-jack's-true-problem-than-cover-up-a-
   symptom-ly y'rs  - tim




From fredrik at effbot.org  Wed Mar 21 08:26:26 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Wed, 21 Mar 2001 08:26:26 +0100
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
References: <web-1702694@digicool.com>
Message-ID: <012601c0b1d8$7dc3cc50$e46940d5@hagrid>

the real fred wrote:

> I agree; this really improves readability.  I'm sure someone
> knows of a precedent for the "suspend" keyword

Icon

(the suspend keyword "leaves the generating function
in suspension")

> but the only one I recall seeing before is "yield" (Sather).

I associate "yield" with non-preemptive threading (yield
to anyone else, not necessarily my caller).

Cheers /F




From tim.one at home.com  Wed Mar 21 08:25:42 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 02:25:42 -0500
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com>

I also like "yield", but when talking about Icon-style generators to people
who may not be familiar with them, I'll continue to use "suspend" (since
that's the word they'll see in the Icon docs, and they can get many more
examples from the latter than from me).




From tommy at ilm.com  Wed Mar 21 08:27:12 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Tue, 20 Mar 2001 23:27:12 -0800 (PST)
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>
	<LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <15032.22433.953503.130175@mace.lucasdigital.com>

I get the same ("0" then "+0") on my irix65 O2.  test_coerce succeeds
as well.


Tim Peters writes:
| Everyone!  Run this program under current CVS:
| 
| x = 0.0
| print "%.17g" % -x
| print "%+.17g" % -x
| 
| What do you get?  WinTel prints "0" for the first and "+0" for the second.
| 
| C89 doesn't define the results.
| 
| C99 requires "-0" for both (on boxes with signed floating zeroes, which is
| virtually all boxes today due to IEEE 754).
| 
| I don't want to argue the C rules, I just want to know whether this *does*
| vary across current platforms.
| 
| 



From tommy at ilm.com  Wed Mar 21 08:37:00 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Tue, 20 Mar 2001 23:37:00 -0800 (PST)
Subject: [Python-Dev] tests failing on irix65
Message-ID: <15032.22504.605383.113425@mace.lucasdigital.com>

Hey Gang,

Given the latest state of the CVS tree I am getting the following
failures on my irix65 O2 (and have been for quite some time- I'm just
now getting around to reporting them):


------------%< snip %<----------------------%< snip %<------------

test_pty
The actual stdout doesn't match the expected stdout.
This much did match (between asterisk lines):
**********************************************************************
test_pty
**********************************************************************
Then ...
We expected (repr): 'I'
But instead we got: '\n'
test test_pty failed -- Writing: '\n', expected: 'I'


importing test_pty into an interactive interpreter gives this:

Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
Type "copyright", "credits" or "license" for more information.
>>> import test.test_pty
Calling master_open()
Got master_fd '4', slave_name '/dev/ttyq6'
Calling slave_open('/dev/ttyq6')
Got slave_fd '5'
Writing to slave_fd

I wish to buy a fish license.For my pet fish, Eric.
calling pty.fork()
Waiting for child (16654) to finish.
Child (16654) exited with status 1024.
>>> 

------------%< snip %<----------------------%< snip %<------------

test_symtable
test test_symtable crashed -- exceptions.TypeError: unsubscriptable object


running the code test_symtable code by hand in the interpreter gives
me:

Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
Type "copyright", "credits" or "license" for more information.
>>> import _symtable
>>> symbols = _symtable.symtable("def f(x): return x", "?", "exec")
>>> symbols
<symtable entry global(0), line 0>
>>> symbols[0]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unsubscriptable object


------------%< snip %<----------------------%< snip %<------------

test_zlib
make: *** [test] Segmentation fault (core dumped)


when I run python in a debugger and import test_zlib by hand I get
this:

Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
Type "copyright", "credits" or "license" for more information.
>>> import test.test_zlib
0xe5c1a120 0x43b6aa94
0xbd602f7 0xbd602f7
expecting Bad compression level
expecting Invalid initialization option
expecting Invalid initialization option
normal compression/decompression succeeded
compress/decompression obj succeeded
decompress with init options succeeded
decompressobj with init options succeeded

the failure is on line 86 of test_zlib.py (calling obj.flush()).
here are the relevant portions of the call stack (sorry they're
stripped):

t_delete(<stripped>) ["malloc.c":801]
realfree(<stripped>) ["malloc.c":531]
cleanfree(<stripped>) ["malloc.c":944]
_realloc(<stripped>) ["malloc.c":329]
_PyString_Resize(<stripped>) ["stringobject.c":2433]
PyZlib_flush(<stripped>) ["zlibmodule.c":595]
call_object(<stripped>) ["ceval.c":2706]
...
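For reference, the failing call boils down to flushing a compression object. A minimal reproduction sketch (illustrative only -- the real test_zlib does more, and the sample payload here is made up):

```python
import zlib

# Feed a compression object some data, then flush it -- the flush is
# where _PyString_Resize reallocates the output buffer (the frame that
# crashes in the irix6 traceback above).
data = b"I wish to buy a fish license. " * 100  # arbitrary sample payload
co = zlib.compressobj(9)
compressed = co.compress(data) + co.flush()

# Round-trip to confirm the stream is intact on a working build.
assert zlib.decompress(compressed) == data
```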



From mal at lemburg.com  Wed Mar 21 11:02:54 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 21 Mar 2001 11:02:54 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
References: <20010320213828.2D30F99C80@waltz.rahul.net>
Message-ID: <3AB87C4E.450723C2@lemburg.com>

Aahz Maruch wrote:
> 
> M.-A. Lemburg wrote:
> >
> > Wasn't shutil declared obsolete ?
> 
> <blink>  What?!

Guido once pronounced on this... mostly because of the comment
at the top regarding cross-platform compatibility:

"""Utility functions for copying files and directory trees.

XXX The functions here don't copy the resource fork or other metadata on Mac.

"""

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Wed Mar 21 11:41:38 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 21 Mar 2001 11:41:38 +0100
Subject: [Python-Dev] Re: What has become of PEP224 ?
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com> <15030.59057.866982.538935@anthem.wooz.org>
Message-ID: <3AB88562.F6FB0042@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "GvR" == Guido van Rossum <guido at python.org> writes:
> 
>     GvR> So I see little chance for PEP 224.  Maybe I should just
>     GvR> pronounce on this, and declare the PEP rejected.
> 
> So, was that a BDFL pronouncement or not? :)

I guess so. 

I'll add Guido's comments (the ones he mailed me in
private) to the PEP and then forget about the idea of getting
doc-strings to play nice with attributes... :-(

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Wed Mar 21 11:46:01 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 21 Mar 2001 11:46:01 +0100
Subject: [Python-Dev] RE: Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEEODGAA.MarkH@ActiveState.com>
Message-ID: <3AB88669.3FDC1DE3@lemburg.com>

Mark Hammond wrote:
> 
> OK - it appears everyone agrees we should go the "Unicode API" route.  I
> actually thought my scheme did not preclude moving to this later.
> 
> This is a much bigger can of worms than I have bandwidth to take on at the
> moment.  As Martin mentions, what will os.listdir() return on Win9x vs
> Win2k?  What does passing a Unicode object to a non-Unicode Win32 platform
> mean? etc.  How do Win95/98/ME differ in their Unicode support?  Do the
> various service packs for each of these change the basic support?
> 
> So unfortunately this simply means the status quo remains until someone
> _does_ have the time and inclination.  That may well be me in the future,
> but is not now.  It also means that until then, Python programmers will
> struggle with this and determine that they can make it work simply by
> encoding the Unicode as an "mbcs" string.  Or worse, they will note that
> "latin1 seems to work" and use that even though it will work "less often"
> than mbcs.  I was simply hoping to automate that encoding using a scheme
> that works "most often".
> 
> The biggest drawback is that by doing nothing we are _encouraging_ the user
> to write broken code.  The way things stand at the moment, the users will
> _never_ pass Unicode objects to these APIs (as they don't work) and will
> therefore manually encode a string.  To my mind this is _worse_ than what my
> scheme proposes - at least my scheme allows Unicode objects to be passed to
> the Python functions - python may choose to change the way it handles these
> in the future.  But by forcing the user to encode a string we have lost
> _all_ meaningful information about the Unicode object and can only hope they
> got the encoding right.
> 
> If anyone else decides to take this on, please let me know.  However, I fear
> that in a couple of years we may still be waiting and in the meantime people
> will be coding hacks that will _not_ work in the new scheme.

Ehm, AFAIR, the Windows CRT APIs can take MBCS character input,
so why don't we go that route first and then later switch on
to full Unicode support ?

After all, I added the "es#" parser markers because you bugged me about
wanting to use them for Windows in the MBCS context -- you even
wrote up the MBCS codec... all this code has to be good for 
something ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Wed Mar 21 12:08:34 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 21 Mar 2001 12:08:34 +0100
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>; from tim.one@home.com on Tue, Mar 20, 2001 at 10:33:15PM -0500
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl> <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <20010321120833.Q29286@xs4all.nl>

On Tue, Mar 20, 2001 at 10:33:15PM -0500, Tim Peters wrote:
> Everyone!  Run this program under current CVS:

> x = 0.0
> print "%.17g" % -x
> print "%+.17g" % -x

> What do you get?  WinTel prints "0" for the first and "+0" for the second.

On BSDI (both 4.0 (gcc 2.7.2.1) and 4.1 (egcs 1.1.2 (2.91.66)) as well as
FreeBSD 4.2 (gcc 2.95.2):

>>> x = 0.0
>>> print "%.17g" % -x
0
>>> print "%+.17g" % -x
+0

Note that neither uses GNU libc even though both use gcc.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jack at oratrix.nl  Wed Mar 21 12:31:07 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 21 Mar 2001 12:31:07 +0100
Subject: [Python-Dev] Unicode and the Windows file system. 
In-Reply-To: Message by "Mark Hammond" <MarkH@ActiveState.com> ,
	     Mon, 19 Mar 2001 20:40:24 +1100 , <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com> 
Message-ID: <20010321113107.A325B36B2C1@snelboot.oratrix.nl>

> The way I see it, to fix this we have 2 basic choices when a Unicode object
> is passed as a filename:
> * we call the Unicode versions of the CRTL.
> * we auto-encode using the "mbcs" encoding, and still call the non-Unicode
> versions of the CRTL.
> 
> The first option has a problem in that determining what Unicode support
> Windows 95/98 have may be more trouble than it is worth.  Sticking to purely
> ascii versions of the functions means that the worst thing that can happen
> is we get a regular file-system error if an mbcs encoded string is passed on
> a non-Unicode platform.
> 
> Does anyone have any objections to this scheme or see any drawbacks in it?
> If not, I'll knock up a patch...

The Mac has a very similar problem here: unless you go to the unicode APIs 
(which is pretty much impossible for stdio calls and such at the moment) you 
have to use the "current" 8-bit encoding for filenames.

Could you put your patch in such a shape that it could easily be adapted for 
other platforms? Something like PyOS_8BitFilenameFromUnicodeObject(PyObject *, 
char *, int) or so?
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From tismer at tismer.com  Wed Mar 21 13:52:05 2001
From: tismer at tismer.com (Christian Tismer)
Date: Wed, 21 Mar 2001 13:52:05 +0100
Subject: FW: FW: [Python-Dev] Simple generator implementation
References: <200103210423.VAA20300@localhost.localdomain>
Message-ID: <3AB8A3F5.D79F7AD8@tismer.com>


Uche Ogbuji wrote:
> 
> > [Uche Ogbuji]
> > > Quite interesting.  I brought up this *exact* point at the
> > > Stackless BOF at IPC9.  I mentioned that the immediate reason
> > > I was interested in Stackless was to supercharge the efficiency
> > > of 4XSLT.  I think that a stackless 4XSLT could pretty much
> > > annihilate the other processors in the field for performance.
> >
> > Hmm.  I'm interested in clarifying the cost/performance boundaries of the
> > various approaches.  I don't understand XSLT (I don't even know what it is).
> > Do you grok the difference between full-blown Stackless and Icon-style
> > generators?
> 
> To a decent extent, based on reading your posts carefully.
> 
> > The correspondent I quoted believed the latter were on-target
> > for XSLT work, and given the way Python works today generators are easier to
> > implement than full-blown Stackless.  But while I can speak with some
> > confidence about the latter, I don't know whether they're sufficient for what
> > you have in mind.
> 
> Based on a discussion with Christian at IPC9, they are.  I should have been
> more clear about that.  My main need is to be able to change a bit of context
> and invoke a different execution path, without going through the full overhead
> of a function call.  XSLT, if written "naturally", tends to involve huge
> numbers of such tweak-context-and-branch operations.
> 
> > If this is some flavor of one-at-time tree-traversal algorithm, generators
> > should suffice.
> >
> > class TreeNode:
> >     # with self.value
> >     #      self.children, a list of TreeNode objects
> >     ...
> >     def generate_kids(self):  # pre-order traversal
> >         suspend self.value
> >         for kid in self.children:
> >             for itskids in kid.generate_kids():
> >                 suspend itskids
> >
> > for k in someTreeNodeObject.generate_kids():
> >     print k
> >
> > So the control-flow is thoroughly natural, but you can only suspend to your
> > immediate invoker (in recursive traversals, this "walks up the chain" of
> > generators for each result).  With explicitly resumable generator objects,
> > multiple trees (or even general graphs -- doesn't much matter) can be
> > traversed in lockstep (or any other interleaving that's desired).
> >
> > Now decide <wink>.
> 
> Suspending only to the invoker should do the trick because it is typically a
> single XSLT instruction that governs multiple tree-operations with varied
> context.
> 
> At IPC9, Guido put up a poll of likely use of stackless features, and it was a
> pretty clear arithmetic progression from those who wanted to use microthreads,
> to those who wanted co-routines, to those who wanted just generators.  The
> generator folks were probably 2/3 of the assembly.  Looks as if many have
> decided, and they seem to agree with you.

Here the exact facts of the poll:

     microthreads: 26
     co-routines:  35
     generators:   44

I think this reads a little different.

ciao - chris
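For the record, the `suspend` pseudocode Tim posted (quoted above) reads almost unchanged as a real generator under the `yield` spelling, one of the keyword candidates discussed in this thread. A runnable sketch, with a made-up example tree:

```python
class TreeNode:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []  # list of TreeNode objects

    def generate_kids(self):  # pre-order traversal, as in the pseudocode
        yield self.value
        for kid in self.children:
            for itskids in kid.generate_kids():
                yield itskids

# Made-up example tree:
#        1
#       / \
#      2   3
#           \
#            4
tree = TreeNode(1, [TreeNode(2), TreeNode(3, [TreeNode(4)])])
assert list(tree.generate_kids()) == [1, 2, 3, 4]
```

Note that control only ever suspends to the immediate invoker: each nested traversal walks its results up the chain of generators, which is exactly the limitation Tim contrasts with full-blown Stackless.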

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From jack at oratrix.nl  Wed Mar 21 13:57:53 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 21 Mar 2001 13:57:53 +0100
Subject: [Python-Dev] test_coercion failing 
In-Reply-To: Message by "Tim Peters" <tim.one@home.com> ,
	     Tue, 20 Mar 2001 22:33:15 -0500 , <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com> 
Message-ID: <20010321125753.9D98B36B2C1@snelboot.oratrix.nl>

> Everyone!  Run this program under current CVS:
> 
> x = 0.0
> print "%.17g" % -x
> print "%+.17g" % -x
> 
> What do you get?  WinTel prints "0" for the first and "+0" for the second.

Macintosh: -0 for both.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From thomas at xs4all.net  Wed Mar 21 14:07:04 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 21 Mar 2001 14:07:04 +0100
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.22504.605383.113425@mace.lucasdigital.com>; from tommy@ilm.com on Tue, Mar 20, 2001 at 11:37:00PM -0800
References: <15032.22504.605383.113425@mace.lucasdigital.com>
Message-ID: <20010321140704.R29286@xs4all.nl>

On Tue, Mar 20, 2001 at 11:37:00PM -0800, Flying Cougar Burnette wrote:

> ------------%< snip %<----------------------%< snip %<------------

> test_pty
> The actual stdout doesn't match the expected stdout.
> This much did match (between asterisk lines):
> **********************************************************************
> test_pty
> **********************************************************************
> Then ...
> We expected (repr): 'I'
> But instead we got: '\n'
> test test_pty failed -- Writing: '\n', expected: 'I'
> 
> 
> importing test_pty into an interactive interpreter gives this:
> 
> Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
> Type "copyright", "credits" or "license" for more information.
> >>> import test.test_pty
> Calling master_open()
> Got master_fd '4', slave_name '/dev/ttyq6'
> Calling slave_open('/dev/ttyq6')
> Got slave_fd '5'
> Writing to slave_fd
> 
> I wish to buy a fish license.For my pet fish, Eric.
> calling pty.fork()
> Waiting for child (16654) to finish.
> Child (16654) exited with status 1024.
> >>> 

Hmm. This is probably my test that is a bit gaga. It tries to test the pty
module, but since I can't find any guarantees on how pty's should work, it
probably relies on platform-specific accidents. It does the following:

---
TEST_STRING_1 = "I wish to buy a fish license."
TEST_STRING_2 = "For my pet fish, Eric."

[..]

debug("Writing to slave_fd")
os.write(slave_fd, TEST_STRING_1) # should check return value
print os.read(master_fd, 1024)

os.write(slave_fd, TEST_STRING_2[:5])
os.write(slave_fd, TEST_STRING_2[5:])
print os.read(master_fd, 1024)
---

Apparently, irix buffers the first write somewhere. Can you test if the
following works better:

---
TEST_STRING_1 = "I wish to buy a fish license.\n"
TEST_STRING_2 = "For my pet fish, Eric.\n"

[..]

debug("Writing to slave_fd")
os.write(slave_fd, TEST_STRING_1) # should check return value
sys.stdout.write(os.read(master_fd, 1024))

os.write(slave_fd, TEST_STRING_2[:5])
os.write(slave_fd, TEST_STRING_2[5:])
sys.stdout.write(os.read(master_fd, 1024))
---

(There should be no need to regenerate the output file, but if it still
fails on the same spot, try running it in verbose and see if you still have
the blank line after 'writing to slave_fd'.)

Note that the pty module is working fine, it's just the test that is screwed
up. Out of curiosity, is the test_openpty test working, or is it skipped ?

I see I also need to fix some other stuff in there, but I'll wait with that
until I hear that this works better :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jack at oratrix.nl  Wed Mar 21 14:30:32 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 21 Mar 2001 14:30:32 +0100
Subject: [Python-Dev] test_coercion failing 
In-Reply-To: Message by Guido van Rossum <guido@digicool.com> ,
	     Tue, 20 Mar 2001 16:16:32 -0500 , <200103202116.QAA01770@cj20424-a.reston1.va.home.com> 
Message-ID: <20010321133032.9906836B2C1@snelboot.oratrix.nl>

It turns out that even simple things like 0j/2 return -0.0.

The culprit appears to be the statement
    r.imag = (a.imag - a.real*ratio) / denom;
in c_quot(), line 108.

The inner part is translated into a PPC multiply-subtract instruction
	fnmsub   fp0, fp1, fp31, fp0
Or, in other words, this computes "0.0 - (2.0 * 0.0)". The result of this is 
apparently -0.0. This sounds reasonable to me, or is this against IEEE754 
rules (or C99 rules?).

If this is all according to 754 rules the one puzzle remaining is why other 
754 platforms don't see the same thing. Could it be that the combined 
multiply-subtract skips a rounding step that separate multiply and subtract 
instructions would take? My floating point knowledge is pretty basic, so 
please enlighten me....
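The signed-zero arithmetic itself can be checked from Python: under IEEE 754, -0.0 compares equal to +0.0 but keeps a negative sign bit, and a *separate* multiply followed by a subtract produces +0.0 for the operand pattern above -- which is why the fused multiply-subtract result stands out. A quick sketch (using math.copysign to read the sign bit):

```python
import math

x = 0.0
neg = -x  # IEEE 754 negation flips the sign bit, giving -0.0

# -0.0 and +0.0 compare equal...
assert neg == 0.0

# ...but the sign bit is still there, recoverable via copysign.
assert math.copysign(1.0, neg) == -1.0
assert math.copysign(1.0, x) == 1.0

# 0.0 - (2.0 * 0.0) is the operand pattern from c_quot(); done as a
# separate multiply and subtract under round-to-nearest, it yields +0.0.
assert math.copysign(1.0, 0.0 - (2.0 * 0.0)) == 1.0
```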
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From guido at digicool.com  Wed Mar 21 15:36:49 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 21 Mar 2001 09:36:49 -0500
Subject: [Python-Dev] Editor sought for Quick Python Book 2nd ed.
Message-ID: <200103211436.JAA04108@cj20424-a.reston1.va.home.com>

The publisher of the Quick Python Book has approached me looking for
an editor for the second edition.  Anybody interested?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From uche.ogbuji at fourthought.com  Wed Mar 21 15:42:04 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Wed, 21 Mar 2001 07:42:04 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: Message from Christian Tismer <tismer@tismer.com> 
   of "Wed, 21 Mar 2001 13:52:05 +0100." <3AB8A3F5.D79F7AD8@tismer.com> 
Message-ID: <200103211442.HAA21574@localhost.localdomain>

> > At IPC9, Guido put up a poll of likely use of stackless features, and it was a
> > pretty clear arithmetic progression from those who wanted to use microthreads,
> > to those who wanted co-routines, to those who wanted just generators.  The
> > generator folks were probably 2/3 of the assembly.  Looks as if many have
> > decided, and they seem to agree with you.
> 
> Here the exact facts of the poll:
> 
>      microthreads: 26
>      co-routines:  35
>      generators:   44
> 
> I think this reads a little different.

Either you're misreading me or I'm misreading you, because your facts seem to 
*exactly* corroborate what I said.  26 -> 35 -> 44 is pretty much an 
arithmetic progression, and it's exactly in the direction I mentioned 
(microthreads -> co-routines -> generators), so what difference do you see?

Of course my 2/3 number is a guess.  60 - 70 total people in the room strikes 
my memory rightly.  Anyone else?


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From skip at pobox.com  Wed Mar 21 15:46:51 2001
From: skip at pobox.com (Skip Montanaro)
Date: Wed, 21 Mar 2001 08:46:51 -0600 (CST)
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>
	<LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <15032.48859.744374.786895@beluga.mojam.com>

    Tim> Everyone!  Run this program under current CVS:
    Tim> x = 0.0
    Tim> print "%.17g" % -x
    Tim> print "%+.17g" % -x

    Tim> What do you get?

% ./python
Python 2.1b2 (#2, Mar 21 2001, 08:43:16) 
[GCC 2.95.3 19991030 (prerelease)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
-0
>>> print "%+.17g" % -x
-0

% ldd ./python
        libpthread.so.0 => /lib/libpthread.so.0 (0x4001a000)
        libdl.so.2 => /lib/libdl.so.2 (0x4002d000)
        libutil.so.1 => /lib/libutil.so.1 (0x40031000)
        libm.so.6 => /lib/libm.so.6 (0x40034000)
        libc.so.6 => /lib/libc.so.6 (0x40052000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

libc appears to actually be GNU libc 2.1.3.



From tismer at tismer.com  Wed Mar 21 15:52:14 2001
From: tismer at tismer.com (Christian Tismer)
Date: Wed, 21 Mar 2001 15:52:14 +0100
Subject: FW: FW: [Python-Dev] Simple generator implementation
References: <200103211442.HAA21574@localhost.localdomain>
Message-ID: <3AB8C01E.867B9C5C@tismer.com>


Uche Ogbuji wrote:
> 
> > > At IPC9, Guido put up a poll of likely use of stackless features, and it was a
> > > pretty clear arithmetic progression from those who wanted to use microthreads,
> > > to those who wanted co-routines, to those who wanted just generators.  The
> > > generator folks were probably 2/3 of the assembly.  Looks as if many have
> > > decided, and they seem to agree with you.
> >
> > Here the exact facts of the poll:
> >
> >      microthreads: 26
> >      co-routines:  35
> >      generators:   44
> >
> > I think this reads a little different.
> 
> Either you're misreading me or I'm misreading you, because your facts seem to
> *exactly* corroborate what I said.  26 -> 35 -> 44 is pretty much an
> arithmetic progression, and it's exactly in the direction I mentioned
> (microthreads -> co-routines -> generators), so what difference do you see?
> 
> Of course my 2/3 number is a guess.  60 - 70 total people in the room strikes
> my memory rightly.  Anyone else?

You are right, I was misunderstanding you. I thought 2/3rds of
all votes were in favor of generators, while my picture
is "most want generators, but the others are of comparable
interest".

sorry - ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From mwh21 at cam.ac.uk  Wed Mar 21 16:39:40 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 15:39:40 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: "Tim Peters"'s message of "Tue, 20 Mar 2001 11:01:21 -0500"
References: <LNBBLJKPBEHFEDALKOLCEEONJGAA.tim.one@home.com>
Message-ID: <m3vgp3f7wj.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one at home.com> writes:

> [Michael Hudson]
> >>> Maybe you could do the check for resize *after* the call to
> >>> insertdict?  I think that would work, but I wouldn't like to go
> >>> messing with such a performance critical bit of code without some
> >>> careful thinking.
> 
> [Guido]
> >> No, that could still decide to resize, couldn't it?
> 
> [Michael]
> > Yes, but not when you're inserting on a key that is already in the
> > dictionary - because the resize would have happened when the key was
> > inserted into the dictionary, and thus the problem we're seeing here
> > wouldn't happen.
> 
> Careful:  this comment is only half the truth:
> 
> 	/* if fill >= 2/3 size, double in size */

Yes, that could be clearer.  I was confused by the distinction between
ma_used and ma_fill for a bit.

> The dictresize following is also how dicts *shrink*.  That is, build
> up a dict, delete a whole bunch of keys, and nothing at all happens
> to the size until you call setitem again (actually, I think you need
> to call it more than once -- the behavior is tricky).

Well, as I read it, if you delete a bunch of keys and then insert the
same keys again (as in pybench's SimpleDictManipulation), no resize
will happen because ma_fill will be unaffected.  A resize will only
happen if you fill up enough slots to get the 

    mp->ma_fill*3 >= mp->ma_size*2

to trigger.

> In any case, that a key is already in the dict does not guarantee
> that a dict won't resize (via shrinking) when doing a setitem.

Yes.  But I still think that the patch I posted here yesterday (the
one that checks for resize after the call to insertdict in
PyDict_SetItem) will suffice; even if you've deleted a bunch of keys,
ma_fill will be unaffected by the deletes, so the size check before
the insertdict won't be triggered (because it wasn't triggered by the
one after the call to insertdict in the last call to setitem), and
neither will the size check after the call to insertdict (because
you're inserting on a key already in the dictionary, so ma_fill will
be unchanged).  But this is mighty fragile; something more explicit
is almost certainly a good idea.
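The ma_used/ma_fill accounting this argument hinges on can be modelled in a few lines of Python: ma_used counts live keys, ma_fill also counts dummy (deleted) slots, and only ma_fill feeds the resize check. A toy model (field names mirror dictobject.c; everything else is simplified and hypothetical):

```python
class ToyDict:
    """Simplified model of the 2.1-era dict resize accounting."""
    def __init__(self, size=8):
        self.ma_size = size   # number of hash-table slots
        self.ma_used = 0      # live keys
        self.ma_fill = 0      # live keys + dummy (deleted) slots
        self.resized = False

    def setitem(self, fresh_slot):
        # fresh_slot: True when the key lands in a never-used slot;
        # reusing a dummy slot leaves ma_fill unchanged.
        if fresh_slot:
            self.ma_fill += 1
        self.ma_used += 1
        # The check under discussion: fill (not used) drives resizing.
        if self.ma_fill * 3 >= self.ma_size * 2:
            self.resized = True

    def delitem(self):
        # Deleting leaves a dummy slot behind: ma_fill stays put.
        self.ma_used -= 1

d = ToyDict()
for _ in range(3):
    d.setitem(fresh_slot=True)   # fill == used == 3
d.delitem(); d.delitem()         # used == 1, fill still 3
d.setitem(fresh_slot=False)      # key hits a dummy slot: fill unchanged
assert d.ma_fill == 3 and d.ma_used == 2 and not d.resized
```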

So someone should either

> bite the bullet and add a new PyDict_AdjustSize function, just
> duplicating the resize logic.  

or just put a check in PyDict_Next, or outlaw this practice and fix
the places that do it.  And then document the conclusion.  And do it
before 2.1b2 on Friday.  I'll submit a patch, unless you're very
quick.

> Delicate, though.

Uhh, I'd say so.

Cheers,
M.

-- 
 Very clever implementation techniques are required to implement this
 insanity correctly and usefully, not to mention that code written
 with this feature used and abused east and west is exceptionally
 exciting to debug.       -- Erik Naggum on Algol-style "call-by-name"




From jeremy at alum.mit.edu  Wed Mar 21 16:51:28 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 10:51:28 -0500 (EST)
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com>
References: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>
	<LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com>
Message-ID: <15032.52736.537333.260718@w221.z064000254.bwi-md.dsl.cnc.net>

On the subject of keyword preferences, I like yield best because I
first saw iterators (Icon's generators) in CLU and CLU uses yield.

Jeremy



From jeremy at alum.mit.edu  Wed Mar 21 16:56:35 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 10:56:35 -0500 (EST)
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.22504.605383.113425@mace.lucasdigital.com>
References: <15032.22504.605383.113425@mace.lucasdigital.com>
Message-ID: <15032.53043.180771.612275@w221.z064000254.bwi-md.dsl.cnc.net>

The test_symtable crash is a shallow one.  There's a dependency
between a .h file and the extension module that isn't captured in the
setup.py.  I think you can delete _symtablemodule.o and rebuild -- or
do a make clean.  It should work then.

Jeremy



From tommy at ilm.com  Wed Mar 21 18:02:48 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Wed, 21 Mar 2001 09:02:48 -0800 (PST)
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.53043.180771.612275@w221.z064000254.bwi-md.dsl.cnc.net>
References: <15032.22504.605383.113425@mace.lucasdigital.com>
	<15032.53043.180771.612275@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <15032.57011.412823.462175@mace.lucasdigital.com>

That did it.  thanks!

Jeremy Hylton writes:
| The test_symtable crash is a shallow one.  There's a dependency
| between a .h file and the extension module that isn't captured in the
| setup.py.  I think you can delete _symtablemodule.o and rebuild -- or
| do a make clean.  It should work then.
| 
| Jeremy



From tommy at ilm.com  Wed Mar 21 18:08:49 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Wed, 21 Mar 2001 09:08:49 -0800 (PST)
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <20010321140704.R29286@xs4all.nl>
References: <15032.22504.605383.113425@mace.lucasdigital.com>
	<20010321140704.R29286@xs4all.nl>
Message-ID: <15032.57243.391141.409534@mace.lucasdigital.com>

Hey Thomas,

with these changes to test_pty.py I now get:

test_pty
The actual stdout doesn't match the expected stdout.
This much did match (between asterisk lines):
**********************************************************************
test_pty
**********************************************************************
Then ...
We expected (repr): 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'
But instead we got: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n'
test test_pty failed -- Writing: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n', expected: 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'

but when I import test.test_pty that blank line is gone.  Sounds like
the test verification just needs to be a bit more flexible, maybe?

test_openpty passes without a problem, BTW.



Thomas Wouters writes:
| On Tue, Mar 20, 2001 at 11:37:00PM -0800, Flying Cougar Burnette wrote:
| 
| > ------------%< snip %<----------------------%< snip %<------------
| 
| > test_pty
| > The actual stdout doesn't match the expected stdout.
| > This much did match (between asterisk lines):
| > **********************************************************************
| > test_pty
| > **********************************************************************
| > Then ...
| > We expected (repr): 'I'
| > But instead we got: '\n'
| > test test_pty failed -- Writing: '\n', expected: 'I'
| > 
| > 
| > importing test_pty into an interactive interpreter gives this:
| > 
| > Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
| > Type "copyright", "credits" or "license" for more information.
| > >>> import test.test_pty
| > Calling master_open()
| > Got master_fd '4', slave_name '/dev/ttyq6'
| > Calling slave_open('/dev/ttyq6')
| > Got slave_fd '5'
| > Writing to slave_fd
| > 
| > I wish to buy a fish license.For my pet fish, Eric.
| > calling pty.fork()
| > Waiting for child (16654) to finish.
| > Child (16654) exited with status 1024.
| > >>> 
| 
| Hmm. This is probably my test that is a bit gaga. It tries to test the pty
| module, but since I can't find any guarantees on how pty's should work, it
| probably relies on platform-specific accidents. It does the following:
| 
| ---
| TEST_STRING_1 = "I wish to buy a fish license."
| TEST_STRING_2 = "For my pet fish, Eric."
| 
| [..]
| 
| debug("Writing to slave_fd")
| os.write(slave_fd, TEST_STRING_1) # should check return value
| print os.read(master_fd, 1024)
| 
| os.write(slave_fd, TEST_STRING_2[:5])
| os.write(slave_fd, TEST_STRING_2[5:])
| print os.read(master_fd, 1024)
| ---
| 
| Apparently, irix buffers the first write somewhere. Can you test if the
| following works better:
| 
| ---
| TEST_STRING_1 = "I wish to buy a fish license.\n"
| TEST_STRING_2 = "For my pet fish, Eric.\n"
| 
| [..]
| 
| debug("Writing to slave_fd")
| os.write(slave_fd, TEST_STRING_1) # should check return value
| sys.stdout.write(os.read(master_fd, 1024))
| 
| os.write(slave_fd, TEST_STRING_2[:5])
| os.write(slave_fd, TEST_STRING_2[5:])
| sys.stdout.write(os.read(master_fd, 1024))
| ---
| 
| (There should be no need to regenerate the output file, but if it still
| fails on the same spot, try running it in verbose and see if you still have
| the blank line after 'writing to slave_fd'.)
| 
| Note that the pty module is working fine, it's just the test that is screwed
| up. Out of curiosity, is the test_openpty test working, or is it skipped ?
| 
| I see I also need to fix some other stuff in there, but I'll wait with that
| until I hear that this works better :)
| 
| -- 
| Thomas Wouters <thomas at xs4all.net>
| 
| Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From barry at digicool.com  Wed Mar 21 18:40:21 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Wed, 21 Mar 2001 12:40:21 -0500
Subject: [Python-Dev] PEP 1, PEP Purpose and Guidelines
Message-ID: <15032.59269.4520.961715@anthem.wooz.org>

With everyone feeling so PEPpy lately (yay!) I thought it was time to
do an updating pass through PEP 1.  Attached below is the latest copy,
also available (as soon as uploading is complete) via

    http://python.sourceforge.net/peps/pep-0001.html

Note the addition of the Replaces: and Replaced-By: headers for
formalizing the PEP replacement policy (thanks to Andrew Kuchling for
the idea and patch).

Enjoy,
-Barry

-------------------- snip snip --------------------
PEP: 1
Title: PEP Purpose and Guidelines
Version: $Revision: 1.16 $
Author: barry at digicool.com (Barry A. Warsaw),
    jeremy at digicool.com (Jeremy Hylton)
Status: Draft
Type: Informational
Created: 13-Jun-2000
Post-History: 21-Mar-2001


What is a PEP?

    PEP stands for Python Enhancement Proposal.  A PEP is a design
    document providing information to the Python community, or
    describing a new feature for Python.  The PEP should provide a
    concise technical specification of the feature and a rationale for
    the feature.

    We intend PEPs to be the primary mechanisms for proposing new
    features, for collecting community input on an issue, and for
    documenting the design decisions that have gone into Python.  The
    PEP author is responsible for building consensus within the
    community and documenting dissenting opinions.

    Because the PEPs are maintained as plain text files under CVS
    control, their revision history is the historical record of the
    feature proposal[1].
    

Kinds of PEPs

    There are two kinds of PEPs.  A standards track PEP describes a
    new feature or implementation for Python.  An informational PEP
    describes a Python design issue, or provides general guidelines or
    information to the Python community, but does not propose a new
    feature.


PEP Work Flow

    The PEP editor, Barry Warsaw <barry at digicool.com>, assigns a
    number to each PEP and changes its status.

    The PEP process begins with a new idea for Python.  Each PEP must
    have a champion -- someone who writes the PEP using the style and
    format described below, shepherds the discussions in the
    appropriate forums, and attempts to build community consensus
    around the idea.  The PEP champion (a.k.a. Author) should first
    attempt to ascertain whether the idea is PEP-able.  Small
    enhancements or patches often don't need a PEP and can be injected
    into the Python development work flow with a patch submission to
    the SourceForge patch manager[2] or feature request tracker[3].

    The PEP champion then emails the PEP editor with a proposed title
    and a rough, but fleshed out, draft of the PEP.  This draft must
    be written in PEP style as described below.

    If the PEP editor approves, he will assign the PEP a number, label
    it as standards track or informational, give it status 'draft',
    and create and check in the initial draft of the PEP.  The PEP
    editor will not unreasonably deny a PEP.  Reasons for denying PEP
    status include duplication of effort, being technically unsound,
    or not being in keeping with the Python philosophy.  The BDFL
    (Benevolent Dictator for Life, Guido van Rossum
    <guido at python.org>) can be consulted during the approval phase,
    and is the final arbiter of the draft's PEP-ability.

    The author of the PEP is then responsible for posting the PEP to
    the community forums, and marshaling community support for it.  As
    updates are necessary, the PEP author can check in new versions if
    they have CVS commit permissions, or can email new PEP versions to
    the PEP editor for committing.

    Standards track PEPs consist of two parts, a design document and
    a reference implementation.  The PEP should be reviewed and
    accepted before a reference implementation is begun, unless a
    reference implementation will aid people in studying the PEP.
    Standards track PEPs must include an implementation -- in the form
    of code, a patch, or a URL to same -- before they can be considered
    Final.

    PEP authors are responsible for collecting community feedback on a
    PEP before submitting it for review.  A PEP that has not been
    discussed on python-list at python.org and/or python-dev at python.org
    will not be accepted.  However, wherever possible, long open-ended
    discussions on public mailing lists should be avoided.  A better
    strategy is to encourage public feedback directly to the PEP
    author, who collects and integrates the comments back into the
    PEP.

    Once the authors have completed a PEP, they must inform the PEP
    editor that it is ready for review.  PEPs are reviewed by the BDFL
    and his chosen consultants, who may accept or reject a PEP or send
    it back to the author(s) for revision.

    Once a PEP has been accepted, the reference implementation must be
    completed.  When the reference implementation is complete and
    accepted by the BDFL, the status will be changed to `Final.'

    A PEP can also be assigned status `Deferred.'  The PEP author or
    editor can assign the PEP this status when no progress is being
    made on the PEP.  Once a PEP is deferred, the PEP editor can
    re-assign it to draft status.

    A PEP can also be `Rejected'.  Perhaps after all is said and done
    it was not a good idea.  It is still important to have a record of
    this fact.

    PEPs can also be replaced by a different PEP, rendering the
    original obsolete.  This is intended for Informational PEPs, where
    version 2 of an API can replace version 1.

    PEP work flow is as follows:

        Draft -> Accepted -> Final -> Replaced
          ^
          +----> Rejected
          v
        Deferred

    Some informational PEPs may also have a status of `Active' if they
    are never meant to be completed.  E.g. PEP 1.


What belongs in a successful PEP?

    Each PEP should have the following parts:

    1. Preamble -- RFC822 style headers containing meta-data about the
       PEP, including the PEP number, a short descriptive title, the
       names and contact info for each author, etc.

    2. Abstract -- a short (~200 word) description of the technical
       issue being addressed.

    3. Copyright/public domain -- Each PEP must either be explicitly
       labelled as placed in the public domain or licensed under the
       Open Publication License[4].

    4. Specification -- The technical specification should describe
       the syntax and semantics of any new language feature.  The
       specification should be detailed enough to allow competing,
       interoperable implementations for any of the current Python
       platforms (CPython, JPython, Python .NET).

    5. Rationale -- The rationale fleshes out the specification by
       describing what motivated the design and why particular design
       decisions were made.  It should describe alternate designs that
       were considered and related work, e.g. how the feature is
       supported in other languages.

       The rationale should provide evidence of consensus within the
       community and discuss important objections or concerns raised
       during discussion.

    6. Reference Implementation -- The reference implementation must
       be completed before any PEP is given status 'Final,' but it
       need not be completed before the PEP is accepted.  It is better
       to finish the specification and rationale first and reach
       consensus on it before writing code.

       The final implementation must include test code and
       documentation appropriate for either the Python language
       reference or the standard library reference.


PEP Style

    PEPs are written in plain ASCII text, and should adhere to a
    rigid style.  There is a Python script that parses this style and
    converts the plain text PEP to HTML for viewing on the web[5].

    Each PEP must begin with an RFC822 style header preamble.  The
    headers must appear in the following order.  Headers marked with
    `*' are optional and are described below.  All other headers are
    required.

        PEP: <pep number>
        Title: <pep title>
        Version: <cvs version string>
        Author: <list of authors' email and real name>
      * Discussions-To: <email address>
        Status: <Draft | Active | Accepted | Deferred | Final | Replaced>
        Type: <Informational | Standards Track>
        Created: <date created on, in dd-mmm-yyyy format>
      * Python-Version: <version number>
        Post-History: <dates of postings to python-list and python-dev>
      * Replaces: <pep number>
      * Replaced-By: <pep number>

    Standards track PEPs must have a Python-Version: header which
    indicates the version of Python that the feature will be released
    with.  Informational PEPs do not need a Python-Version: header.

    While a PEP is in private discussions (usually during the initial
    Draft phase), a Discussions-To: header will indicate the mailing
    list or URL where the PEP is being discussed.  No Discussions-To:
    header is necessary if the PEP is being discussed privately with
    the author, or on the python-list or python-dev email mailing
    lists.

    PEPs may also have a Replaced-By: header indicating that a PEP has
    been rendered obsolete by a later document; the value is the
    number of the PEP that replaces the current document.  The newer
    PEP must have a Replaces: header containing the number of the PEP
    that it rendered obsolete.

    PEP headings must begin in column zero and the initial letter of
    each word must be capitalized as in book titles.  Acronyms should
    be in all capitals.  The body of each section must be indented 4
    spaces.  Code samples inside body sections should be indented a
    further 4 spaces, and other indentation can be used as required to
    make the text readable.  You must use two blank lines between the
    last line of a section's body and the next section heading.

    Tab characters must never appear in the document at all.  A PEP
    should include the Emacs stanza included by example in this PEP.

    A PEP must contain a Copyright section, and it is strongly
    recommended to put the PEP in the public domain.

    You should footnote any URLs in the body of the PEP, and a PEP
    should include a References section with those URLs expanded.


References and Footnotes

    [1] This historical record is available by the normal CVS commands
    for retrieving older revisions.  For those without direct access
    to the CVS tree, you can browse the current and past PEP revisions
    via the SourceForge web site at

    http://cvs.sourceforge.net/cgi-bin/cvsweb.cgi/python/nondist/peps/?cvsroot=python

    [2] http://sourceforge.net/tracker/?group_id=5470&atid=305470

    [3] http://sourceforge.net/tracker/?atid=355470&group_id=5470&func=browse

    [4] http://www.opencontent.org/openpub/

    [5] The script referred to here is pep2html.py, which lives in
    the same directory in the CVS tree as the PEPs themselves.  Try
    "pep2html.py --help" for details.

    The URL for viewing PEPs on the web is
    http://python.sourceforge.net/peps/


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:



From m.favas at per.dem.csiro.au  Wed Mar 21 20:44:30 2001
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 22 Mar 2001 03:44:30 +0800
Subject: [Python-Dev] test_coercion failing
Message-ID: <3AB9049E.7331F570@per.dem.csiro.au>

[Tim searches for -0's]
On Tru64 Unix (4.0F) with Compaq's C compiler I get:
Python 2.1b2 (#344, Mar 22 2001, 03:18:25) [C] on osf1V4
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
-0
>>> print "%+.17g" % -x
-0

and on Solaris 8 (Sparc) with gcc I get:
Python 2.1b2 (#23, Mar 22 2001, 03:25:27) 
[GCC 2.95.2 19991024 (release)] on sunos5
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
-0
>>> print "%+.17g" % -x
-0

while on FreeBSD 4.2 with gcc I get:
Python 2.1b2 (#3, Mar 22 2001, 03:36:19) 
[GCC 2.95.2 19991024 (release)] on freebsd4
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
0
>>> print "%+.17g" % -x
+0

-- 
Mark Favas  -   m.favas at per.dem.csiro.au
CSIRO, Private Bag No 5, Wembley, Western Australia 6913, AUSTRALIA



From tim.one at home.com  Wed Mar 21 21:18:54 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 15:18:54 -0500
Subject: [Python-Dev] test_coercion failing 
In-Reply-To: <20010321133032.9906836B2C1@snelboot.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEOJHAA.tim.one@home.com>

[Jack Jansen]
> It turns out that even simple things like 0j/2 return -0.0.
>
> The culprit appears to be the statement
>     r.imag = (a.imag - a.real*ratio) / denom;
> in c_quot(), line 108.
>
> The inner part is translated into a PPC multiply-subtract instruction
> 	fnmsub   fp0, fp1, fp31, fp0
> Or, in other words, this computes "0.0 - (2.0 * 0.0)". The result
> of this is apparently -0.0. This sounds reasonable to me, or is
> this against IEEE754 rules (or C99 rules?).

I've said it twice, but I'll say it once more <wink>:  under 754 rules,

   (+0) - (+0)

must return +0 in all rounding modes except for (the exceedingly unlikely, as
it's not the default) to-minus-infinity rounding mode.  The latter case is
the only case in which it should return -0.  Under the default
to-nearest/even rounding mode, and under the to-plus-infinity and to-0
rounding modes, +0 is the required result.

However, we don't know whether a.imag is +0 or -0 on your box; it *should* be
+0.  If it were -0, then

   (-0) - (+0)

should indeed be -0 under default 754 rules.  So this still needs to be
traced back.  That is, when you say it computes "0.0 - (2.0 * 0.0)", there
are four *possible* things that could mean, depending on the signs of the
zeroes.  As is, I'm afraid we still don't know enough to say whether the -0
result is due to an unexpected -0 as one of the inputs.

> If this is all according to 754 rules the one puzzle remaining is
> why other 754 platforms don't see the same thing.

Because the antecedent is wrong:  the behavior you're seeing violates 754
rules (unless you've somehow managed to ask for to-minus-infinity rounding,
or you're getting -0 inputs for bogus reasons).

Try this:

    print repr(1.0 - 1e-100)

If that doesn't display "1.0", but something starting "0.9999"..., then
you've somehow managed to get to-minus-infinity rounding.

Another thing to try:

    print 2+0j

Does that also come out as "2-0j" for you?

What about:

    print repr((0j).real), repr((0j).imag)

?  (I'm trying to see whether -0 parts somehow get invented out of thin air.)

> Could it be that the combined multiply-subtract skips a rounding
> step that separate multiply and subtract instructions would take? My
> floating point knowledge is pretty basic, so please enlighten me....

I doubt this has anything to do with the fused mul-sub.  That operation isn't
defined as such by 754, but it would be a mondo serious hardware bug if it
didn't operate on endcase values the same way as separate mul-then-sub.
OTOH, the new complex division algorithm may generate a fused mul-sub in
places where the old algorithm did not, so I can't rule that out either.

BTW, most compilers for boxes with fused mul-add have a switch to disable
generating the fused instructions.  Might want to give that a try (if you
have such a switch, it may mask the symptom but leave the cause unknown).
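
The 754 rules above can be checked directly in Python; math.copysign is
used here (an addition for illustration, not part of this thread) to
expose the sign bit that "%g" formatting may hide on some platforms:

```python
import math

# Under the default to-nearest/even rounding mode:
# (+0) - (+0) must be +0, while (-0) - (+0) must be -0.
pos_zero = 0.0
neg_zero = -0.0
assert math.copysign(1.0, pos_zero - pos_zero) == 1.0
assert math.copysign(1.0, neg_zero - pos_zero) == -1.0

# Tim's rounding-mode probe: under default rounding this stays 1.0;
# only to-minus-infinity rounding would produce 0.9999...
assert repr(1.0 - 1e-100) == '1.0'
```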




From tim.one at home.com  Wed Mar 21 21:45:09 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 15:45:09 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
Message-ID: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>

When running the full test suite, test_doctest fails (in current CVS; did not
fail yesterday).  This was on Windows.  Other platforms?

Does not fail in isolation.  Doesn't matter whether or not .pyc files are
deleted first, and doesn't matter whether a regular or debug build of Python
is used.

In four runs of the full suite with regrtest -r (randomize test order),
test_doctest failed twice and passed twice.  So it's unlikely this has
something specifically to do with doctest.

roll-out-the-efence?-ly y'rs  - tim




From jeremy at alum.mit.edu  Wed Mar 21 21:41:53 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 15:41:53 -0500 (EST)
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
Message-ID: <15033.4625.822632.276247@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "TP" == Tim Peters <tim.one at home.com> writes:

  TP> In four runs of the full suite with regrtest -r (randomize test
  TP> order), test_doctest failed twice and passed twice.  So it's
  TP> unlikely this has something specifically to do with doctest.

How does doctest fail?  Does that give any indication of the nature of
the problem?  Does it fail with a core dump (or whatever Windows does
instead)?  Or is the output wrong?

Jeremy



From guido at digicool.com  Wed Mar 21 22:01:12 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 21 Mar 2001 16:01:12 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: Your message of "Wed, 21 Mar 2001 15:45:09 EST."
             <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com> 
Message-ID: <200103212101.QAA11781@cj20424-a.reston1.va.home.com>

> When running the full test suite, test_doctest fails (in current CVS; did not
> fail yesterday).  This was on Windows.  Other platforms?
> 
> Does not fail in isolation.  Doesn't matter whether or not .pyc files are
> deleted first, and doesn't matter whether a regular or debug build of Python
> is used.
> 
> In four runs of the full suite with regrtest -r (randomize test order),
> test_doctest failed twice and passed twice.  So it's unlikely this has
> something specifically to do with doctest.

Last time we had something like this it was a specific dependency
between two test modules, where if test_A was imported before test_B,
things were fine, but in the other order one of them would fail.

I noticed that someone (Jeremy?) checked in a whole slew of changes to
test modules, including test_support.  I also noticed that stuff was
added to test_support that would show up if you did "from test_support
import *".  I believe previously this was intended to only export a
small number of things; now it exports more, e.g. unittest, os, and
sys.  But that doesn't look like it would make much of a difference.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mwh21 at cam.ac.uk  Wed Mar 21 22:03:40 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 21:03:40 +0000
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
Message-ID: <m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one at home.com> writes:

> When running the full test suite, test_doctest fails (in current CVS; did not
> fail yesterday).  This was on Windows.  Other platforms?

Yes.  Linux.

I'm getting:

We expected (repr): 'doctest.Tester.runstring.__doc__'
But instead we got: 'doctest.Tester.summarize.__doc__'

> Does not fail in isolation.  

Indeed.

How does doctest order its tests?  I bet the changes just made to
dictobject.c make the order of dict.items() slightly unpredictable
(groan).

Cheers,
M.

-- 
81. In computing, turning the obvious into the useful is a living
    definition of the word "frustration".
  -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html




From jeremy at alum.mit.edu  Wed Mar 21 21:54:05 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 15:54:05 -0500 (EST)
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
	<m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <15033.5357.471974.18878@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "MWH" == Michael Hudson <mwh21 at cam.ac.uk> writes:

  MWH> "Tim Peters" <tim.one at home.com> writes:
  >> When running the full test suite, test_doctest fails (in current
  >> CVS; did not fail yesterday).  This was on Windows.  Other
  >> platforms?

  MWH> Yes.  Linux.

Interesting.  I've done four runs (-r) and not seen any errors on my
Linux box.  Maybe I'm just unlucky.

Jeremy



From tim.one at home.com  Wed Mar 21 22:13:14 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 16:13:14 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <15033.4625.822632.276247@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFEJHAA.tim.one@home.com>

[Jeremy]
> How does doctest fail?  Does that give any indication of the nature of
> the problem?  Does it fail with a core dump (or whatever Windows does
> instead)?  Or is the output wrong?

Sorry, I should know better than to say "doesn't work".  It's that the output
is wrong:

It's good up through the end of this section of output:

...
1 items had failures:
   1 of   2 in XYZ
4 tests in 2 items.
3 passed and 1 failed.
***Test Failed*** 1 failures.
(1, 4)
ok
0 of 6 examples failed in doctest.Tester.__doc__
Running doctest.Tester.__init__.__doc__
0 of 0 examples failed in doctest.Tester.__init__.__doc__
Running doctest.Tester.run__test__.__doc__
0 of 0 examples failed in doctest.Tester.run__test__.__doc__
Running


But then:

We expected (repr): 'doctest.Tester.runstring.__doc__'
But instead we got: 'doctest.Tester.summarize.__doc__'


Hmm!  Perhaps doctest is merely running sub-tests in a different order.
doctest uses whatever order dict.items() returns (for the module __dict__ and
class __dict__s, etc).  It should probably force the order.  I'm going to get
something to eat and ponder that ... if true, The Mystery is how the internal
dicts could get *built* in a different order across runs ...

BTW, does or doesn't a run of the full test suite complain here too under
your Linux box?




From tim.one at home.com  Wed Mar 21 22:17:39 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 16:17:39 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEFFJHAA.tim.one@home.com>

[Michael Hudson]
> Yes.  Linux.
>
> I'm getting:
>
> We expected (repr): 'doctest.Tester.runstring.__doc__'
> But instead we got: 'doctest.Tester.summarize.__doc__'

Same thing, then (Jeremy, *don't* use -r).

>> Does not fail in isolation.

> Indeed.

> How does doctest order its tests?  I bet the changes just made to
> dictobject.c make the order of dict.items() slightly unpredictable
> (groan).

As just posted, doctest uses whatever .items() returns but probably
shouldn't.  It's hard to see how the dictobject.c changes could affect that,
but I have to agree they're the most likely suspect.  I'll back those out
locally and see whether the problem persists.

But I'm going to eat first!




From michel at digicool.com  Wed Mar 21 22:44:29 2001
From: michel at digicool.com (Michel Pelletier)
Date: Wed, 21 Mar 2001 13:44:29 -0800 (PST)
Subject: [Python-Dev] PEP 245: Python Interfaces
Message-ID: <Pine.LNX.4.32.0103211340050.25303-100000@localhost.localdomain>

Barry has just checked in PEP 245 for me.

http://python.sourceforge.net/peps/pep-0245.html

I'd like to open up the discussion phase on this PEP to anyone who is
interested in commenting on it.  I'm not sure of the proper forum, it has
been discussed to some degree on the types-sig.

Thanks,

-Michel




From mwh21 at cam.ac.uk  Wed Mar 21 23:01:15 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 22:01:15 +0000
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
References: <LNBBLJKPBEHFEDALKOLCOEFFJHAA.tim.one@home.com>
Message-ID: <m3elvqg4t0.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one at home.com> writes:

> [Michael Hudson]
> > Yes.  Linux.
> >
> > I'm getting:
> >
> > We expected (repr): 'doctest.Tester.runstring.__doc__'
> > But instead we got: 'doctest.Tester.summarize.__doc__'
> 
> Same thing, then (Jeremy, *don't* use -r).
> 
> >> Does not fail in isolation.
> 
> > Indeed.
> 
> > How does doctest order its tests?  I bet the changes just made to
> > dictobject.c make the order of dict.items() slightly unpredictable
> > (groan).
> 
> As just posted, doctest uses whatever .items() returns but probably
> shouldn't.  It's hard to see how the dictobject.c changes could
> affect that, but I have to agree they're the most likely suspect.

> I'll back those out locally and see whether the problem persists.

Fixes things here.

Oooh, look at this:

$ ../../python 
Python 2.1b2 (#3, Mar 21 2001, 21:29:14) 
[GCC 2.95.1 19990816/Linux (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import doctest
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', '_Tester__record_outcome', 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge', 'rundoc', '__module__']
>>> doctest.testmod(doctest)
(0, 53)
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', 'summarize', '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc', '_Tester__record_outcome', '__module__']

Indeed:

$ ../../python 
Python 2.1b2 (#3, Mar 21 2001, 21:29:14) 
[GCC 2.95.1 19990816/Linux (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import doctest
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', '_Tester__record_outcome', 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge', 'rundoc', '__module__']
>>> doctest.Tester.__dict__['__doc__'] = doctest.Tester.__dict__['__doc__']
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', 'summarize', '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc', '_Tester__record_outcome', '__module__']

BUT, and this is where I give up:

    This has always happened!  It even happens with Python 1.5.2!

it just makes a difference now.  So maybe it's something else entirely.

Cheers,
M.

-- 
  MARVIN:  Do you want me to sit in a corner and rust, or just fall
           apart where I'm standing?
                    -- The Hitch-Hikers Guide to the Galaxy, Episode 2




From tim.one at home.com  Wed Mar 21 23:30:52 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 17:30:52 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <m3elvqg4t0.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com>

[Michael Hudson]
> Oooh, look at this:
>
> $ ../../python
> Python 2.1b2 (#3, Mar 21 2001, 21:29:14)
> [GCC 2.95.1 19990816/Linux (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> import doctest
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', '_Tester__record_outcome',
> 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge',
> 'rundoc', '__module__']
> >>> doctest.testmod(doctest)
> (0, 53)
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', 'summarize',
> '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc',
> '_Tester__record_outcome', '__module__']

Cute!  Hard to explain, unless someone is using PyDict_Next on this dict:
since the dict has 11 items, it's exactly at the boundary where PyDict_Next
will now resize it.

> Indeed:
>
> $ ../../python
> Python 2.1b2 (#3, Mar 21 2001, 21:29:14)
> [GCC 2.95.1 19990816/Linux (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> import doctest
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', '_Tester__record_outcome',
> 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge',
> 'rundoc', '__module__']
> >>> doctest.Tester.__dict__['__doc__'] = doctest.Tester.__dict__['__doc__']
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', 'summarize',
> '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc',
> '_Tester__record_outcome', '__module__']
>
> BUT, and this is where I give up:
>
>     This has always happened!  It even happens with Python 1.5.2!

Yes, but in this case you did an explicit setitem, and PyDict_SetItem *will*
resize it (because it started with 11 entries:  11*3 >= 16*2, but 10*3 <
16*2).  Nothing has changed there in many years.
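
The resize threshold cited here reduces to simple arithmetic (the table
size of 16 is carried over from the text; the rule itself is the
historical dictobject.c condition fill*3 >= size*2):

```python
# Historical CPython rule: a dict whose hash table has `size` slots is
# resized on insert once fill*3 >= size*2.
size = 16                    # assumed table size for the 11-entry namespace dict
assert 11 * 3 >= size * 2    # 33 >= 32: a setitem on an 11-entry dict resizes it
assert 10 * 3 <  size * 2    # 30 <  32: at 10 entries it would not
```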

> it just makes a difference now.  So maybe it's something else entirely.

Well, nobody should rely on the order of dict.items().  Curiously, doctest
actually doesn't, but the order of its verbose-mode *output* blocks changes,
and it's the regrtest.py framework that cares about that.

I'm calling this one a bug in doctest.py, and will fix it there.  Ugly:
since we can no longer rely on list.sort() not raising exceptions, it won't be
enough to replace the existing

    for k, v in dict.items():

with

    items = dict.items()
    items.sort()
    for k, v in items:

I guess

    keys = dict.keys()
    keys.sort()
    for k in keys:
        v = dict[k]

is the easiest safe alternative (these are namespace dicts, btw, so it's
certain the keys are all strings).
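
The safe alternative can be wrapped as a small helper (a sketch; the
name sorted_namespace_items is hypothetical, and modern list/dict idiom
is used rather than the 2.1-era spelling):

```python
def sorted_namespace_items(d):
    # Sort the keys alone (all strings in a namespace dict), so no
    # exception can arise from comparing unlike *values* during the sort.
    keys = list(d.keys())
    keys.sort()
    return [(k, d[k]) for k in keys]

ns = {'runstring': 1, 'merge': 2, 'rundoc': 3}
assert sorted_namespace_items(ns) == [('merge', 2), ('rundoc', 3), ('runstring', 1)]
```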

thanks-for-the-help!-ly y'rs  - tim




From guido at digicool.com  Wed Mar 21 23:36:13 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 21 Mar 2001 17:36:13 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: Your message of "Wed, 21 Mar 2001 17:30:52 EST."
             <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com> 
Message-ID: <200103212236.RAA12977@cj20424-a.reston1.va.home.com>

> Cute!  Hard to explain, unless someone is using PyDict_Next on this dict:
> since the dict has 11 items, it's exactly at the boundary where PyDict_Next
> will now resize it.

It *could* be the garbage collector.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mwh21 at cam.ac.uk  Thu Mar 22 00:24:33 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 23:24:33 +0000
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: Guido van Rossum's message of "Wed, 21 Mar 2001 17:36:13 -0500"
References: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com> <200103212236.RAA12977@cj20424-a.reston1.va.home.com>
Message-ID: <m3ae6eg0y6.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido at digicool.com> writes:

> > Cute!  Hard to explain, unless someone is using PyDict_Next on this dict:
> > since the dict has 11 items, it's exactly at the boundary where PyDict_Next
> > will now resize it.
> 
> It *could* be the garbage collector.

I think it would have to be; there just aren't that many calls to
PyDict_Next around.  I confused myself by thinking that calling keys()
called PyDict_Next, but it doesn't.

glad-that-one's-sorted-out-ly y'rs
M.

-- 
  "The future" has arrived but they forgot to update the docs.
                                        -- R. David Murray, 9 May 2000




From greg at cosc.canterbury.ac.nz  Thu Mar 22 02:37:00 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 22 Mar 2001 13:37:00 +1200 (NZST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <3AB87C4E.450723C2@lemburg.com>
Message-ID: <200103220137.NAA06583@s454.cosc.canterbury.ac.nz>

"M.-A. Lemburg" <mal at lemburg.com>:

> XXX The functions here don't copy the resource fork or other metadata on Mac.

Wouldn't it be better to fix these functions on the Mac
instead of depriving everyone else of them?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Mar 22 02:39:05 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 22 Mar 2001 13:39:05 +1200 (NZST)
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <012601c0b1d8$7dc3cc50$e46940d5@hagrid>
Message-ID: <200103220139.NAA06587@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <fredrik at effbot.org>:

> I associate "yield" with non-preemptive threading (yield
> to anyone else, not necessarily my caller).

Well, this flavour of generators is sort of a special case
subset of non-preemptive threading, so the usage is not
entirely inconsistent.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim.one at home.com  Thu Mar 22 02:41:02 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 20:41:02 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <15032.22433.953503.130175@mace.lucasdigital.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEGKJHAA.tim.one@home.com>

[Flying Cougar Burnette]
> I get the same ("0" then "+0") on my irix65 O2.  test_coerce succeeds
> as well.

Tommy, it's great to hear that Irix screws up signed-zero output too!  The
two computer companies I own stock in are SGI and Microsoft.  I'm sure this
isn't a coincidence <wink>.

i'll-use-linux-when-it-gets-rid-of-those-damn-sign-bits-ly y'rs  - tim




From represearch at yahoo.com  Wed Mar 21 19:46:00 2001
From: represearch at yahoo.com (reptile research)
Date: Wed, 21 Mar 2001 19:46:00
Subject: [Python-Dev] (no subject)
Message-ID: <E14fu8l-0000lc-00@mail.python.org>



From nhodgson at bigpond.net.au  Thu Mar 22 03:07:28 2001
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Thu, 22 Mar 2001 13:07:28 +1100
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
References: <200103220137.NAA06583@s454.cosc.canterbury.ac.nz>
Message-ID: <034601c0b274$d8bab8c0$8119fea9@neil>

Greg Ewing:
> "M.-A. Lemburg" <mal at lemburg.com>:
>
> > XXX The functions here don't copy the resource fork or other metadata on
> > Mac.
>
> Wouldn't it be better to fix these functions on the Mac
> instead of depriving everyone else of them?

   Then they should be fixed for Windows as well, where they don't copy
secondary forks either. While not used much by native code, forks are
commonly used on NT servers which serve files to Macintoshes.

   There is also the issue of other metadata. Should shutil optionally copy
ownership information? Access Control Lists? Summary information? A really
well-designed module here could be very useful, but would take quite some work.

   Neil




From nhodgson at bigpond.net.au  Thu Mar 22 03:14:22 2001
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Thu, 22 Mar 2001 13:14:22 +1100
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
References: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz><LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com> <15032.52736.537333.260718@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <035801c0b275$cf667510$8119fea9@neil>

Jeremy Hylton:

> On the subject of keyword preferences, I like yield best because I
> first saw iterators (Icon's generators) in CLU and CLU uses yield.

   For me the benefit of "yield" is that it connotes both transfer of value
and transfer of control, just like "return", while "suspend" only connotes
transfer of control.

   "This tree yields 20 Kilos of fruit each year" and "When merging, yield
to the vehicles to your right".

   Neil




From barry at digicool.com  Thu Mar 22 04:16:30 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Wed, 21 Mar 2001 22:16:30 -0500
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
References: <3AB87C4E.450723C2@lemburg.com>
	<200103220137.NAA06583@s454.cosc.canterbury.ac.nz>
Message-ID: <15033.28302.876972.730118@anthem.wooz.org>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    GE> Wouldn't it be better to fix these functions on the Mac
    GE> instead of depriving everyone else of them?

Either way, shutil sure is useful!



From MarkH at ActiveState.com  Thu Mar 22 06:16:09 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 22 Mar 2001 16:16:09 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com>
Message-ID: <LCEPIIGDJPKCOIHOBJEPOEKKDGAA.MarkH@ActiveState.com>

I have submitted patch #410465 for this.

http://sourceforge.net/tracker/?func=detail&aid=410465&group_id=5470&atid=305470

Comments are in the patch, so I won't repeat them here, but I would
appreciate a few reviews on the code.  Particularly, my addition of a new
format to PyArg_ParseTuple and the resulting extra string copy may raise a
few eyebrows.

I've even managed to include the new test file and its output in the patch,
so it will hopefully apply cleanly and run a full test if you want to try
it.

Thanks,

Mark.




From nas at arctrix.com  Thu Mar 22 06:44:32 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Wed, 21 Mar 2001 21:44:32 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCENJJGAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Tue, Mar 20, 2001 at 01:31:49AM -0500
References: <20010319084534.A18938@glacier.fnational.com> <LNBBLJKPBEHFEDALKOLCCENJJGAA.tim_one@email.msn.com>
Message-ID: <20010321214432.A25810@glacier.fnational.com>

[Tim on comparing fringes of two trees]:
> In Icon you need to create co-expressions to solve this
> problem, because its generators aren't explicitly resumable,
> and Icon has no way to spell "kick a pair of generators in
> lockstep".  But explicitly resumable generators are in fact
> "good enough" for this classic example, which is usually used
> to motivate coroutines.

Apparently they are good for lots of other things too.  Tonight I
implemented passing values using resume().  Next, I decided to
see if I had enough magic juice to tackle the coroutine example
from Gordon's stackless tutorial.  It turns out that I didn't
need the extra functionality.  Generators are enough.

The code is not too long so I've attached it.  I figure that some
people might need a break from 2.1 release issues.  I think the
generator version is even simpler than the coroutine version.

  Neil

# Generator example:
# The program is a variation of a Simula 67 program due to Dahl & Hoare,
# who in turn credit the original example to Conway.
#
# We have a number of input lines, terminated by a 0 byte.  The problem
# is to squash them together into output lines containing 72 characters
# each.  A semicolon must be added between input lines.  Runs of blanks
# and tabs in input lines must be squashed into single blanks.
# Occurrences of "**" in input lines must be replaced by "^".
#
# Here's a test case:

test = """\
   d    =   sqrt(b**2  -  4*a*c)
twoa    =   2*a
   L    =   -b/twoa
   R    =   d/twoa
  A1    =   L + R
  A2    =   L - R\0
"""

# The program should print:
# d = sqrt(b^2 - 4*a*c);twoa = 2*a; L = -b/twoa; R = d/twoa; A1 = L + R;
#A2 = L - R
#done
# getlines: delivers the input lines
# disassemble: takes input lines and delivers them one
#    character at a time, also inserting a semicolon into
#    the stream between lines
# squash:  takes characters and passes them on, first replacing
#    "**" with "^" and squashing runs of whitespace
# assemble: takes characters and packs them into lines with 72
#    characters each; when it sees a null byte, delivers the
#    last (padded) line and then ends the stream

from Generator import Generator

def getlines(text):
    g = Generator()
    for line in text.split('\n'):
        g.suspend(line)
    g.end()

def disassemble(cards):
    g = Generator()
    try:
        for card in cards:
            for i in range(len(card)):
                if card[i] == '\0':
                    raise EOFError 
                g.suspend(card[i])
            g.suspend(';')
    except EOFError:
        pass
    while 1:
        g.suspend('') # infinite stream, handy for squash()

def squash(chars):
    g = Generator()
    while 1:
        c = chars.next()
        if not c:
            break
        if c == '*':
            c2 = chars.next()
            if c2 == '*':
                c = '^'
            else:
                g.suspend(c)
                c = c2
        if c in ' \t':
            while 1:
                c2 = chars.next()
                if c2 not in ' \t':
                    break
            g.suspend(' ')
            c = c2
        if c == '\0':
            g.end()
        g.suspend(c)
    g.end()

def assemble(chars):
    g = Generator()
    line = ''
    for c in chars:
        if c == '\0':
            g.end()
        if len(line) == 72:
            g.suspend(line)
            line = ''
        line = line + c
    line = line + ' '*(72 - len(line))
    g.suspend(line)
    g.end()


if __name__ == '__main__':
    for line in assemble(squash(disassemble(getlines(test)))):
        print line
    print 'done'
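[Editorial aside, not part of the original post: Neil's code depends on his experimental Generator class. The same Conway pipeline can be sketched with the yield-based generator syntax Python later adopted; the version below is an approximation of that behavior in modern Python 3, and assumes the input stream is '\0'-terminated.]

```python
test = """\
   d    =   sqrt(b**2  -  4*a*c)
twoa    =   2*a
   L    =   -b/twoa
   R    =   d/twoa
  A1    =   L + R
  A2    =   L - R\0
"""

def getlines(text):
    # deliver the input lines
    for line in text.split('\n'):
        yield line

def disassemble(cards):
    # deliver lines one character at a time, inserting a
    # semicolon into the stream between lines
    for card in cards:
        for ch in card:
            yield ch          # the terminating '\0' passes through
        yield ';'

def squash(chars):
    # replace "**" with "^" and squash runs of blanks/tabs;
    # assumes the stream is '\0'-terminated
    it = iter(chars)
    for c in it:
        if c == '*':
            c2 = next(it)
            if c2 == '*':
                yield '^'
                continue
            yield '*'
            c = c2
        if c in ' \t':
            while True:
                c2 = next(it)
                if c2 not in ' \t':
                    break
            yield ' '
            c = c2
        yield c
        if c == '\0':
            return

def assemble(chars):
    # pack characters into 72-character lines; stop at the null byte
    line = ''
    for c in chars:
        if c == '\0':
            break
        if len(line) == 72:
            yield line
            line = ''
        line += c
    yield line.ljust(72)

if __name__ == '__main__':
    for line in assemble(squash(disassemble(getlines(test)))):
        print(line)
    print('done')
```

Chaining the four generator functions reproduces the output Neil's comment describes: two 72-character lines followed by "done".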

        



From cce at clarkevans.com  Thu Mar 22 11:14:25 2001
From: cce at clarkevans.com (Clark C. Evans)
Date: Thu, 22 Mar 2001 05:14:25 -0500 (EST)
Subject: [Python-Dev] Re: PEP 1, PEP Purpose and Guidelines
In-Reply-To: <15032.59269.4520.961715@anthem.wooz.org>
Message-ID: <Pine.LNX.4.21.0103220504280.18700-100000@clarkevans.com>

Barry,

  If you don't mind, I'd like to apply for one of them
  there PEP numbers.  Sorry for not following the guidelines,
  it won't happen again.

  Also, I believe that this isn't just my work, but rather
  a first pass at consensus on this issue via the vocal and
  silent feedback from those on the main and type special
  interest group.  I hope that I have done their ideas
  and feedback justice (if not, I'm sure I'll hear about it).

Thank you so much,

Clark

...

PEP: XXX
Title: Protocol Checking and Adaptation
Version: $Revision$
Author: Clark Evans
Python-Version: 2.2
Status: Draft
Type: Standards Track
Created: 21-Mar-2001
Updated: 23-Mar-2001

Abstract

    This proposal puts forth a built-in, explicit method for
    the adaptation (including verification) of an object to a 
    context where a specific type, class, interface, or other 
    protocol is expected.  This proposal can leverage existing
    protocols such as the type system and class hierarchy and is
    orthogonal, if not complementary, to the pending interface
    mechanism [1] and signature-based type-checking system [2].

    This proposal allows an object to answer two questions.  First,
    are you a such and such?  Meaning, does this object have a 
    particular required behavior?  And second, if not, can you give
    me a handle which is?  Meaning, can the object construct an 
    appropriate wrapper object which can provide compliance with
    the protocol expected.  This proposal does not limit what 
    such and such (the protocol) is or what compliance to that
    protocol means, and it allows other query/adapter techniques 
    to be added later and utilized through the same interface 
    and infrastructure introduced here.

Motivation

    Currently there is no standardized mechanism in Python for 
    asking if an object supports a particular protocol. Typically,
    existence of particular methods, particularly those that are 
    built-in such as __getitem__, is used as an indicator of 
    support for a particular protocol.  This technique works for 
    protocols blessed by GvR, such as the new enumerator proposal
    identified by a new built-in __iter__.  However, this technique
    does not admit an infallible way to identify interfaces lacking 
    a unique, built-in signature method.

    More so, there is no standardized way to obtain an adapter 
    for an object.  Typically, with objects passed to a context
    expecting a particular protocol, either the object knows about 
    the context and provides its own wrapper or the context knows 
    about the object and automatically wraps it appropriately.  The 
    problem with this approach is that such adaptations are one-offs,
    are not centralized in a single place of the user's code, and 
    are not executed with a common technique, etc.  This lack of
    standardization increases code duplication with the same 
    adapter occurring in more than one place or it encourages 
    classes to be re-written instead of adapted.  In both cases,
    maintainability suffers.

    In the recent type special interest group discussion [3], there
    were two complementary quotes which motivated this proposal:

       "The deep(er) part is whether the object passed in thinks of
        itself as implementing the Foo interface. This means that
        its author has (presumably) spent at least a little time
        thinking about the invariants that a Foo should obey."  GvR [4]

    and

       "There is no concept of asking an object which interface it
        implements. There is no "the" interface it implements. It's
        not even a set of interfaces, because the object doesn't 
        know them in advance. Interfaces can be defined after objects
        conforming to them are created." -- Marcin Kowalczyk [5]

    The first quote focuses on the intent of a class, including 
    not only the existence of particular methods, but more 
    importantly the call sequence, behavior, and other invariants.
    The second quote focuses on the type signature of the
    class.  These quotes highlight a distinction between interface
    as a "declarative, I am a such-and-such" construct, as opposed
    to a "descriptive, It looks like a such-and-such" mechanism.

    Four positive cases for code-reuse include:

     a) It is obvious the object has the same protocol that
        the context expects.  This occurs when the type or
        class expected happens to be the type of the object
        or class.  This is the simplest and easiest case.

     b) When the object knows about the protocol that the
        context requires and knows how to adapt itself 
        appropriately.  Perhaps it already has the methods
        required, or it can make an appropriate wrapper

     c) When the protocol knows about the object and can
        adapt it on behalf of the context.  This is often
        the case with backwards-compatibility cases.

     d) When the context knows about the object and the 
        protocol and knows how to adapt the object so that
        the required protocol is satisfied.

    This proposal should allow each of these cases to be handled;
    however, the proposal only concentrates on the first two cases,
    leaving the latter two cases where the protocol adapts the 
    object and where the context adapts the object to other proposals.
    Furthermore, this proposal attempts to enable these four cases
    in a manner completely neutral to type checking or interface
    declaration and enforcement proposals.  

Specification

    For the purposes of this specification, let the word protocol
    signify any current or future method of stating requirements of 
    an object, be it through type checking, class membership, interface 
    examination, explicit types, etc.  Also let the word compliance
    be dependent on, and defined by, each specific protocol.

    This proposal initially supports one protocol, the type/class
    membership as defined by isinstance(object, protocol).
    Other types of protocols, such as interfaces, can be added through
    another proposal without loss of generality of this proposal.  
    This proposal attempts to keep the first set of protocols small
    and relatively unobjectionable.

    This proposal would introduce a new binary operator "isa".
    The left hand side of this operator is the object to be checked
    ("self"), and the right hand side is the protocol to check this
    object against ("protocol").  The return value of the operator 
    will be either the left hand side if the object complies with 
    the protocol or None.
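    [Since "isa" is proposed syntax, its semantics for the initial
    type/class protocol can be sketched as a plain function, written
    here in modern Python syntax; this is an editorial illustration
    consistent with the reference implementation later in the PEP:]

```python
def isa(obj, protocol):
    # Return obj when it complies with the protocol, else None.
    # Only the type/class membership protocol is sketched here;
    # no wrapping is allowed for "isa".
    if isinstance(obj, protocol):
        return obj
    return None
```

    For example, isa(6, int) returns 6, while isa("six", int)
    returns None.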

    Given an object and a protocol, the adaptation of the object is:
     a) self, if the object is already compliant with the protocol,
     b) a secondary object ("wrapper"), which provides a view of the
        object compliant with the protocol.  This is explicitly 
        vague, and wrappers are allowed to maintain their own 
        state as necessary.
     c) None, if the protocol is not understood, or if object 
        cannot be verified compliant with the protocol and/or
        if an appropriate wrapper cannot be constructed.

    Further, a new built-in function, adapt, is introduced.  This
    function takes two arguments, the object being adapted ("obj") 
    and the protocol requested of the object ("protocol").  This
    function returns the adaptation of the object for the protocol,
    either self, a wrapper, or None depending upon the circumstances.
    None may be returned if adapt does not understand the protocol,
    or if adapt cannot verify compliance or create a wrapper.

    For this machinery to work, two other components are required.
    First is a private, shared implementation of the adapt function
    and isa operator.  This private routine will have three 
    arguments: the object being adapted ("self"), the protocol 
    requested ("protocol"), and a flag ("can_wrap").  The flag
    specifies whether the adaptation may be a wrapper; if the flag is not
    set, then the adaptation may only be self or None.  This flag is
    required to support the isa operator.  The obvious case 
    mentioned in the motivation, where the object easily complies 
    with the protocol, is implemented in this private routine.  

    To enable the second case mentioned in the motivation, when 
    the object knows about the protocol, a new method slot, __adapt__
    on each object is required.  This optional slot takes three
    arguments, the object being adapted ("self"), the protocol 
    requested ("protocol"), and a flag ("can_wrap").  And, like 
    the other functions, must return an adaptation, be it self, a
    wrapper if allowed, or None.  This method slot allows a class 
    to declare which protocols it supports in addition to those 
    which are part of the obvious case.

    This slot is called first, before the obvious cases are examined;
    if None is returned, then the default processing proceeds.  If the
    default processing is wrong, then the AdaptForceNoneException
    can be thrown.  The private routine will catch this specific 
    exception and return None in this case.  This technique allows a
    class to subclass another class, yet catch the cases where
    it is considered substitutable for the base class.  Since 
    this is the exception, rather than the normal case, an exception 
    is warranted and is used to pass this information along.  The 
    caller of adapt or isa will be unaware of this particular exception
    as the private routine will return None in this particular case.

    Please note two important things.  First, this proposal does not
    preclude the addition of other protocols.  Second, this proposal 
    does not preclude other possible cases where adapter pattern may
    hold, such as the protocol knowing the object or the context 
    knowing the object and the protocol (cases c and d in the 
    motivation).  In fact, this proposal opens the gate for these
    other mechanisms to be added, while keeping the change in
    manageable chunks.

Reference Implementation and Example Usage

    -----------------------------------------------------------------
    adapter.py
    -----------------------------------------------------------------
        import types
        AdaptForceNoneException = "(private error for adapt and isa)"

        def interal_adapt(obj,protocol,can_wrap):

            # the obj may have the answer, so ask it about the ident
            adapt = getattr(obj, '__adapt__',None)
            if adapt:
                try:
                    retval = adapt(protocol,can_wrap)
                    # todo: if not can_wrap check retval for None or obj
                except AdaptForceNoneException:
                    return None
                if retval: return retval

            # the protocol may have the answer, so ask it about the obj
            pass

            # the context may have the answer, so ask it about the obj
            pass

            # check to see if the current object is ok as is
            if type(protocol) is types.TypeType or \
               type(protocol) is types.ClassType:
                if isinstance(obj,protocol):
                    return obj

            # ok... nothing matched, so return None
            return None

        def adapt(obj,protocol):
            return interal_adapt(obj,protocol,1)

        # imagine binary operator syntax
        def isa(obj,protocol):
            return interal_adapt(obj,protocol,0)

    -----------------------------------------------------------------
    test.py
    -----------------------------------------------------------------
        from adapter import adapt
        from adapter import isa
        from adapter import AdaptForceNoneException

        class KnightsWhoSayNi: pass  # shrubbery troubles

        class EggsOnly:  # an unrelated class/interface
            def eggs(self,str): print "eggs!" + str

        class HamOnly:  # used as an interface, no inheritance
            def ham(self,str): pass
            def _bugger(self): pass  # an irritating private member

        class SpamOnly: # a base class, inheritance used
            def spam(self,str): print "spam!" + str

        class EggsSpamAndHam (SpamOnly,KnightsWhoSayNi):
            def ham(self,str): print "ham!" + str
            def __adapt__(self,protocol,can_wrap):
                if protocol is HamOnly:
                    # implements HamOnly implicitly, no _bugger
                    return self
                if protocol is KnightsWhoSayNi:
                    # we are no longer the Knights who say Ni!
                    raise AdaptForceNoneException
                if protocol is EggsOnly and can_wrap:
                    # Knows how to create the eggs!
                    return EggsOnly()

        def test():
            x = EggsSpamAndHam()
            adapt(x,SpamOnly).spam("Ni!")
            adapt(x,EggsOnly).eggs("Ni!")
            adapt(x,HamOnly).ham("Ni!")
            adapt(x,EggsSpamAndHam).ham("Ni!")
            if None is adapt(x,KnightsWhoSayNi): print "IckIcky...!"
            if isa(x,SpamOnly): print "SpamOnly"
            if isa(x,EggsOnly): print "EggsOnly"
            if isa(x,HamOnly): print "HamOnly"
            if isa(x,EggsSpamAndHam): print "EggsAndSpam"
            if isa(x,KnightsWhoSayNi): print "KnightsWhoSayNi"

    -----------------------------------------------------------------
    Example Run
    -----------------------------------------------------------------
        >>> import test
        >>> test.test()
        spam!Ni!
        eggs!Ni!
        ham!Ni!
        ham!Ni!
        IckIcky...!
        SpamOnly
        HamOnly
        EggsAndSpam

Relationship To Paul Prescod and Tim Hochberg's Type Assertion method

    The example syntax Paul put forth recently [2] was:

        interface Interface
            def __check__(self,obj)

    Paul's proposal adds the checking part to the third (c)
    case described in the motivation, when the protocol knows
    about the object.  As stated, this could be easily added
    as a step in the interal_adapt function:

            # the protocol may have the answer, so ask it about the obj

                if type(protocol) is types.Interface:
                    if protocol.__check__(obj):
                        return obj

    Further, and quite excitingly, if the syntax for this type 
    based assertion added an extra argument, "can_wrap", then this
    mechanism could be overloaded to also provide adapters to
    objects that the interface knows about.

    In short, the work put forth by Paul and company is great, and
    I don't see any reason why these two proposals couldn't work
    together in harmony, if not be completely complementary.

Relationship to Python Interfaces [1] by Michel Pelletier

    The relationship to this proposal is a bit less clear 
    to me, although an implements(obj,anInterface) built-in
    function was mentioned.  Thus, this could be added naively
    as a step in the interal_adapt function:

        if type(protocol) is types.Interface:
            if implements(obj,protocol):
                return obj

    However, there is a clear concern here.  Due to the 
    tight semantics being described in this specification,
    it is clear the proposed isa operator would have to have
    a 1-1 correspondence with the implements function, when the
    type of protocol is an Interface.  Thus, when can_wrap is
    true, __adapt__ may be called; however, it is clear that
    the return value would have to be double-checked.  Thus, 
    a more realistic change would be more like:

        def internal_interface_adapt(obj,interface):
            if implements(obj,interface):
                return obj
            else:
                return None

        def interal_adapt(obj,protocol,can_wrap):

            # the obj may have the answer, so ask it about the ident
            adapt = getattr(obj, '__adapt__',None)
            if adapt:
                try:
                    retval = adapt(protocol,can_wrap)
                except AdaptForceNoneException:
                    if type(protocol) is types.Interface:
                        return internal_interface_adapt(obj,protocol)
                    else:
                        return None
                if retval: 
                    if type(protocol) is types.Interface:
                        if can_wrap and implements(retval,protocol):
                            return retval
                        return internal_interface_adapt(obj,protocol)
                    else:
                        return retval

            if type(protocol) is types.Interface:
                return internal_interface_adapt(obj,protocol)

            # remainder of function... 

    It is significantly more complicated, but doable.

Relationship To Iterator Proposal
 
    The iterator special interest group is proposing a new built-in
    called "__iter__", which could be replaced with __adapt__ if
    an Iterator class is introduced.  Following is an example.

        class Iterator:
            def next(self):
                raise IndexError

        class IteratorTest:
            def __init__(self,max):
                self.max = max
            def __adapt__(self,protocol,can_wrap):
                if protocol is Iterator and can_wrap:
                    class IteratorTestIterator(Iterator):
                        def __init__(self,max):
                            self.max = max
                            self.count = 0
                        def next(self):
                            self.count = self.count + 1
                            if self.count < self.max:
                              return self.count
                            return Iterator.next(self)
                    return IteratorTestIterator(self.max)
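    [An illustrative aside, not in the original PEP text: a consumer
    could drive this example through adaptation.  The adapt function
    below is a simplified stand-in for the PEP's reference
    interal_adapt (no AdaptForceNoneException handling), and the
    consume helper is a hypothetical name; modern Python syntax.]

```python
class Iterator:
    def next(self):
        raise IndexError

def adapt(obj, protocol, can_wrap=1):
    # simplified form of the PEP's reference implementation:
    # ask the object first, then fall back to an isinstance check
    query = getattr(obj, '__adapt__', None)
    if query is not None:
        retval = query(protocol, can_wrap)
        if retval is not None:
            return retval
    if isinstance(obj, protocol):
        return obj
    return None

class IteratorTest:
    def __init__(self, max):
        self.max = max
    def __adapt__(self, protocol, can_wrap):
        if protocol is Iterator and can_wrap:
            class IteratorTestIterator(Iterator):
                def __init__(self, max):
                    self.max = max
                    self.count = 0
                def next(self):
                    self.count = self.count + 1
                    if self.count < self.max:
                        return self.count
                    return Iterator.next(self)   # raises IndexError
            return IteratorTestIterator(self.max)
        return None

def consume(obj):
    # drive any object adaptable to Iterator, old-protocol style:
    # call next() until IndexError signals exhaustion
    it = adapt(obj, Iterator)
    results = []
    while True:
        try:
            results.append(it.next())
        except IndexError:
            break
    return results
```

    Under these assumptions, consume(IteratorTest(4)) collects
    [1, 2, 3]; with can_wrap unset, no wrapper may be built, so
    adapt(IteratorTest(4), Iterator, can_wrap=0) returns None.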

Relationship To Microsoft's Query Interface

    Although this proposal may sound similar to Microsoft's
    QueryInterface, it differs in a number of aspects.  First,
    there is not a special "IUnknown" interface which can be used
    for object identity, although this could be proposed as one
    of those "special" blessed interface protocol identifiers.
    Second, with QueryInterface, once an object supports a particular
    interface it must always thereafter support this interface;
    this proposal makes no such guarantee, although this may be 
    added at a later time.  Third, implementations of Microsoft's
    QueryInterface must support a kind of equivalence relation.
    By reflexive they mean that querying an interface for itself
    must always succeed.  By symmetrical they mean that if one 
    can successfully query an interface IA for a second interface 
    IB, then one must also be able to successfully query the 
    interface IB for IA.  And finally, by transitive they mean if 
    one can successfully query IA for IB and one can successfully
    query IB for IC, then one must be able to successfully query 
    IA for IC.  Ability to support this type of equivalence relation
    should be encouraged, but may not be possible.  Further research 
    on this topic (by someone familiar with Microsoft COM) would be
    helpful in further determining how compatible this proposal is.

Backwards Compatibility

    There should be no problem with backwards compatibility.  
    Indeed this proposal, save a built-in adapt() function, 
    could be tested without changes to the interpreter.

Questions and Answers

    Q:  Why was the name changed from __query__ to __adapt__ ?  

    A:  It was clear that significant QueryInterface assumptions were
        being laid upon the proposal, when the intent was more of an 
        adapter.  Of course, if an object does not need to be adapted
        then it can be used directly and this is the basic premise.

    Q:  Why is the checking mechanism mixed with the adapter
        mechanism?

    A:  Good question.  They could be separated; however, there
        is significant overlap, if you consider the checking
        protocol as returning a compliant object (self) or
        not a compliant object (None).  In this way, adapting
        becomes a special case of checking, via the can_wrap.

        Really, this could be separated out, but the two 
        concepts are very related so much duplicate work
        would be done, and the overall mechanism would feel
        quite a bit less unified.

    Q:  This is just a type-coercion proposal.

    A:  No.  Certainly it could be used for type-coercion, but such
        coercion would be explicit via the __adapt__ method or adapt function.
        Of course, if this was used for the iterator interface, then the
        for construct may do an implicit __adapt__(Iterator), but
        this would be an exception rather than the rule.

    Q:  Why did the author write this PEP?

    A:  He wanted a simple proposal that covered the "deep part" of
        interfaces without getting tied up in signature woes.  Also, it
        was clear that the __iter__ proposal put forth is just an example
        of this type of interface.  Further, the author is doing XML-based
        client/server work, and wants to write generic tree-based
        algorithms that work on particular interfaces and would
        like these algorithms to be used by anyone willing to make
        an "adapter" having the interface required by the algorithm.

    Q:  Is this in opposition to the type special interest group?

    A:  No.  It is meant as a simple, need based solution that could
        easily complement the efforts by that group.

    Q:  Why was the identifier changed from a string to a class?

    A:  This was done on Michel Pelletier's suggestion.  This mechanism
        appears to be much cleaner than the DNS string proposal, which 
        caused a few eyebrows to rise.  

    Q:  Why not handle the case where instances are used to identify 
        protocols?  In other words, 6 isa 6 (where the 6 on the right
        is promoted to types.IntType)?

    A:  Sounds like someone might object; let's keep this in a
        separate proposal.

    Q:  Why not let obj isa obj be true?  or class isa baseclass?

    A:  Sounds like someone might object; let's keep this in a
        separate proposal.

    Q:  It seems that a reverse lookup could be used; why not add this?

    A:  There are many other lookup and/or checking mechanisms that
        could be used here.  However, the goal of this PEP is to be
        small and sweet; having any more functionality would make it
        more objectionable to some people.  That said, this proposal
        was designed in large part to be completely orthogonal to
        other methods, so these mechanisms can be added later if
        needed.

Credits

    This proposal was created in large part from the feedback 
    of the talented individuals on both the main mailing list
    and the type signature list.  Specific contributors
    include (sorry if I missed someone):

        Robin Thomas, Paul Prescod, Michel Pelletier, 
        Alex Martelli, Jeremy Hylton, Carlos Ribeiro,
        Aahz Maruch, Fredrik Lundh, Rainer Deyke,
        Timothy Delaney, and Huaiyu Zhu

Copyright

    This document has been placed in the public domain.


References and Footnotes

    [1] http://python.sourceforge.net/peps/pep-0245.html
    [2] http://mail.python.org/pipermail/types-sig/2001-March/001223.html
    [3] http://www.zope.org/Members/michel/types-sig/TreasureTrove
    [4] http://mail.python.org/pipermail/types-sig/2001-March/001105.html
    [5] http://mail.python.org/pipermail/types-sig/2001-March/001206.html
    [6] http://mail.python.org/pipermail/types-sig/2001-March/001223.html





From thomas at xs4all.net  Thu Mar 22 12:14:48 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 12:14:48 +0100
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103220139.NAA06587@s454.cosc.canterbury.ac.nz>; from greg@cosc.canterbury.ac.nz on Thu, Mar 22, 2001 at 01:39:05PM +1200
References: <012601c0b1d8$7dc3cc50$e46940d5@hagrid> <200103220139.NAA06587@s454.cosc.canterbury.ac.nz>
Message-ID: <20010322121448.T29286@xs4all.nl>

On Thu, Mar 22, 2001 at 01:39:05PM +1200, Greg Ewing wrote:
> Fredrik Lundh <fredrik at effbot.org>:

> > I associate "yield" with non-preemptive threading (yield
> > to anyone else, not necessarily my caller).

> Well, this flavour of generators is sort of a special case
> subset of non-preemptive threading, so the usage is not
> entirely inconsistent.

I prefer yield, but I'll yield to suspend as long as we get coroutines or
suspendable frames so I can finish my Python-embedded MUX with
task-switching Python code :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Thu Mar 22 14:51:16 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 22 Mar 2001 08:51:16 -0500
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: Your message of "Wed, 21 Mar 2001 22:16:30 EST."
             <15033.28302.876972.730118@anthem.wooz.org> 
References: <3AB87C4E.450723C2@lemburg.com> <200103220137.NAA06583@s454.cosc.canterbury.ac.nz>  
            <15033.28302.876972.730118@anthem.wooz.org> 
Message-ID: <200103221351.IAA25632@cj20424-a.reston1.va.home.com>

> >>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:
> 
>     GE> Wouldn't it be better to fix these functions on the Mac
>     GE> instead of depriving everyone else of them?
> 
> Either way, shutil sure is useful!

Yes, but deceptively so.  What should we do?  Anyway, it doesn't
appear to be officially deprecated yet (can't see it in the docs) and
I think it may be best to keep it that way.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From pf at artcom-gmbh.de  Thu Mar 22 15:17:46 2001
From: pf at artcom-gmbh.de (Peter Funk)
Date: Thu, 22 Mar 2001 15:17:46 +0100 (MET)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <200103221351.IAA25632@cj20424-a.reston1.va.home.com> from Guido van Rossum at "Mar 22, 2001  8:51:16 am"
Message-ID: <m14g5uN-000CnEC@artcom0.artcom-gmbh.de>

Hi,

Guido van Rossum schrieb:
> > >>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:
> > 
> >     GE> Wouldn't it be better to fix these functions on the Mac
> >     GE> instead of depriving everyone else of them?
> > 
> > Either way, shutil sure is useful!
> 
> Yes, but deceptively so.  What should we do?  Anyway, it doesn't
> appear to be officially deprecated yet (can't see it in the docs) and
> I think it may be best to keep it that way.

A very simple idea would be, to provide two callback hooks,
which will be invoked by each call to copyfile or remove.

Example:  Someone uses the package netatalk on Linux to provide file
services to Macs.  netatalk stores the resource forks in hidden sub
directories called .AppleDouble.  The callback function could then
copy the .AppleDouble files around using shutil.copyfile itself.

Regards, Peter




From fredrik at effbot.org  Thu Mar 22 15:37:59 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Thu, 22 Mar 2001 15:37:59 +0100
Subject: [Python-Dev] booted from sourceforge
Message-ID: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>

attempts to access the python project, the tracker (etc) results in:

    You don't have permission to access <whatever> on this server.

is it just me?

Cheers /F




From thomas at xs4all.net  Thu Mar 22 15:44:29 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 15:44:29 +0100
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.57243.391141.409534@mace.lucasdigital.com>; from tommy@ilm.com on Wed, Mar 21, 2001 at 09:08:49AM -0800
References: <15032.22504.605383.113425@mace.lucasdigital.com> <20010321140704.R29286@xs4all.nl> <15032.57243.391141.409534@mace.lucasdigital.com>
Message-ID: <20010322154429.W27808@xs4all.nl>

On Wed, Mar 21, 2001 at 09:08:49AM -0800, Flying Cougar Burnette wrote:

> with these changes to test_pty.py I now get:

> test_pty
> The actual stdout doesn't match the expected stdout.
> This much did match (between asterisk lines):
> **********************************************************************
> test_pty
> **********************************************************************
> Then ...
> We expected (repr): 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'
> But instead we got: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n'
> test test_pty failed -- Writing: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n', expected: 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'
> 
> but when I import test.test_pty that blank line is gone.  Sounds like
> the test verification just needs to be a bit more flexible, maybe?

Yes... I'll explicitly turn \r\n into \n (at the end of the string) so the
test can still use the normal print/stdout-checking routines (mostly because
I want to avoid doing the error reporting myself) but it would still barf if
the read strings contain other trailing garbage or extra whitespace and
such.

I'll check in a new version in a few minutes.. Let me know if it still has
problems.

> test_openpty passes without a problem, BTW.

Good... so at least that works ;-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Thu Mar 22 15:45:57 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 15:45:57 +0100
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>; from fredrik@effbot.org on Thu, Mar 22, 2001 at 03:37:59PM +0100
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>
Message-ID: <20010322154557.A13066@xs4all.nl>

On Thu, Mar 22, 2001 at 03:37:59PM +0100, Fredrik Lundh wrote:
> attempts to access the python project, the tracker (etc) results in:

>     You don't have permission to access <whatever> on this server.

> is it just me?

I noticed this yesterday as well, but only for a few minutes. I wasn't on SF
for long, though, so I might have hit it again if I'd tried once more. I
suspect they are/were commissioning a new (set of) webserver(s) in the pool,
and they screwed up the permissions.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Thu Mar 22 15:55:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 22 Mar 2001 09:55:37 -0500
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: Your message of "Thu, 22 Mar 2001 15:37:59 +0100."
             <000f01c0b2dd$b477a3b0$e46940d5@hagrid> 
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid> 
Message-ID: <200103221455.JAA25875@cj20424-a.reston1.va.home.com>

> attempts to access the python project, the tracker (etc) results in:
> 
>     You don't have permission to access <whatever> on this server.
> 
> is it just me?
> 
> Cheers /F

No, it's SF.  From their most recent mailing (this morning!) to the
customer:

"""The good news is, it is unlikely SourceForge.net will have any
power related downtime.  In December we moved the site to Exodus, and
they have amble backup power systems to deal with the on going
blackouts."""

So my expectation is that it's a power failure -- system folks are
notoriously optimistic about the likelihood of failures... :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)




From fdrake at acm.org  Thu Mar 22 15:57:47 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Thu, 22 Mar 2001 09:57:47 -0500 (EST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <200103221351.IAA25632@cj20424-a.reston1.va.home.com>
References: <3AB87C4E.450723C2@lemburg.com>
	<200103220137.NAA06583@s454.cosc.canterbury.ac.nz>
	<15033.28302.876972.730118@anthem.wooz.org>
	<200103221351.IAA25632@cj20424-a.reston1.va.home.com>
Message-ID: <15034.4843.674513.237570@localhost.localdomain>

Guido van Rossum writes:
 > Yes, but deceptively so.  What should we do?  Anyway, it doesn't
 > appear to be officially deprecated yet (can't see it in the docs) and
 > I think it may be best to keep it that way.

  I don't think it's deceived me yet!  I see no reason to deprecate
it, and I don't recall anyone telling me it should be.  Nor do I
recall a discussion here suggesting that it should be.
  If it has hidden corners that I just haven't run into (and it *has*
been pointed out that it does have corners, at least on some
platforms), why don't we just consider those bugs that can be fixed?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From thomas at xs4all.net  Thu Mar 22 16:03:20 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 16:03:20 +0100
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: <200103221455.JAA25875@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 22, 2001 at 09:55:37AM -0500
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid> <200103221455.JAA25875@cj20424-a.reston1.va.home.com>
Message-ID: <20010322160320.B13066@xs4all.nl>

On Thu, Mar 22, 2001 at 09:55:37AM -0500, Guido van Rossum wrote:
> > attempts to access the python project, the tracker (etc) results in:
> > 
> >     You don't have permission to access <whatever> on this server.
> > 
> > is it just me?
> > 
> > Cheers /F

> [..] my expectation that it's a power failure -- system folks are
> notoriously optimistic about the likelihood of failures... :-)

It's quite uncommon for power failures to cause permission problems, though :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mwh21 at cam.ac.uk  Thu Mar 22 16:18:58 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 22 Mar 2001 15:18:58 +0000
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: "Fredrik Lundh"'s message of "Thu, 22 Mar 2001 15:37:59 +0100"
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>
Message-ID: <m33dc5g7bx.fsf@atrus.jesus.cam.ac.uk>

"Fredrik Lundh" <fredrik at effbot.org> writes:

> attempts to access the python project, the tracker (etc) results in:
> 
>     You don't have permission to access <whatever> on this server.
> 
> is it just me?

I was getting this a lot yesterday.  Give it a minute, and try again -
worked for me, albeit somewhat tediously.

Cheers,
M.

-- 
  Just put the user directories on a 486 with deadrat7.1 and turn the
  Octane into the afforementioned beer fridge and keep it in your
  office. The lusers won't notice the difference, except that you're
  more cheery during office hours.              -- Pim van Riezen, asr




From gward at python.net  Thu Mar 22 17:50:43 2001
From: gward at python.net (Greg Ward)
Date: Thu, 22 Mar 2001 11:50:43 -0500
Subject: [Python-Dev] Re: Alleged deprecation of shutils
In-Reply-To: <034601c0b274$d8bab8c0$8119fea9@neil>; from nhodgson@bigpond.net.au on Thu, Mar 22, 2001 at 01:07:28PM +1100
References: <200103220137.NAA06583@s454.cosc.canterbury.ac.nz> <034601c0b274$d8bab8c0$8119fea9@neil>
Message-ID: <20010322115043.A5993@cthulhu.gerg.ca>

On 22 March 2001, Neil Hodgson said:
>    Then they should be fixed for Windows as well where they don't copy
> secondary forks either. While not used much by native code, forks are
> commonly used on NT servers which serve files to Macintoshes.
> 
>    There is also the issue of other metadata. Should shutil optionally copy
> ownership information? Access Control Lists? Summary information? A really
> well designed module here could be very useful but quite some work.

There's a pretty good 'copy_file()' routine in the Distutils; I found
shutil quite inadequate, so rolled my own.  Jack Jansen patched it so it
does the "right thing" on Mac OS.  By now, it has probably copied many
files all over the place on all of your computers, so it sounds like it
works.  ;-)

See the distutils.file_util module for implementation and documentation.
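
As a small illustration of the metadata question raised above, even
within the standard library "copying a file" means different things:
shutil.copyfile() transfers only the data, while copy2() (copyfile
plus copystat) also carries permission bits and timestamps -- and
neither touches resource forks, ACLs, or other platform metadata.  A
minimal sketch:

```python
import os
import shutil

def copy_with_times(src, dst):
    """Copy src to dst with shutil.copy2 and report whether the
    modification time survived the trip (copyfile would lose it)."""
    shutil.copy2(src, dst)
    return os.stat(src).st_mtime_ns == os.stat(dst).st_mtime_ns
```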

        Greg
-- 
Greg Ward - Unix bigot                                  gward at python.net
http://starship.python.net/~gward/
Sure, I'm paranoid... but am I paranoid ENOUGH?



From fredrik at pythonware.com  Thu Mar 22 18:09:49 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 22 Mar 2001 18:09:49 +0100
Subject: [Python-Dev] Function in os module for available disk space, why  not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF> <3AB62EAE.FCFD7C9F@lemburg.com> <048401c0b172$dd6892a0$e46940d5@hagrid>
Message-ID: <01bd01c0b2f2$e8702fb0$e46940d5@hagrid>

> (and my plan is to make a statvfs subset available on
> all platforms, which makes your code even simpler...)

windows patch here:
http://sourceforge.net/tracker/index.php?func=detail&aid=410547&group_id=5470&atid=305470

guess it has to wait for 2.2, though...
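
On the Unix side the answer to the thread's original question already
looks roughly like this (a sketch; os.statvfs is POSIX-only, which is
what the Windows patch above is meant to fix):

```python
import os

def free_bytes(path):
    """Bytes available to a non-root user on the filesystem
    containing path, computed from the statvfs result."""
    st = os.statvfs(path)
    # f_bavail: free blocks available to unprivileged users;
    # f_frsize: fundamental filesystem block size.
    return st.f_bavail * st.f_frsize
```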

Cheers /F




From greg at cosc.canterbury.ac.nz  Thu Mar 22 23:36:02 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 23 Mar 2001 10:36:02 +1200 (NZST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <m14g5uN-000CnEC@artcom0.artcom-gmbh.de>
Message-ID: <200103222236.KAA08215@s454.cosc.canterbury.ac.nz>

pf at artcom-gmbh.de (Peter Funk):

> netatalk stores the resource forks in hidden sub
> directories called .AppleDouble.

None of that is relevant if the copying is being done from
the Mac end. To the Mac it just looks like a normal Mac
file, so the standard Mac file-copying techniques will work.
No need for any callbacks.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tommy at ilm.com  Fri Mar 23 00:03:29 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Thu, 22 Mar 2001 15:03:29 -0800 (PST)
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
Message-ID: <15034.33486.157946.686067@mace.lucasdigital.com>

Hey Folks,

When running an interactive interpreter python currently tries to
import "readline", ostensibly to make your interactive experience a
little easier (with history, extra keybindings, etc).  For a while now
Python has also shipped with a standard module called "rlcompleter",
which adds name completion to the readline functionality.

Can anyone think of a good reason why we don't import rlcompleter
instead of readline by default?  I can give you a good reason why it
*should*, but I'd rather not bore anyone with the details if I don't
have to.

All in favor, snag the following patch....


------------%< snip %<----------------------%< snip %<------------

Index: Modules/main.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Modules/main.c,v
retrieving revision 1.51
diff -r1.51 main.c
290c290
<               v = PyImport_ImportModule("readline");
---
>               v = PyImport_ImportModule("rlcompleter");



From pf at artcom-gmbh.de  Fri Mar 23 00:10:46 2001
From: pf at artcom-gmbh.de (Peter Funk)
Date: Fri, 23 Mar 2001 00:10:46 +0100 (MET)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <200103222236.KAA08215@s454.cosc.canterbury.ac.nz> from Greg Ewing at "Mar 23, 2001 10:36: 2 am"
Message-ID: <m14gEEA-000CnEC@artcom0.artcom-gmbh.de>

Hi,

> pf at artcom-gmbh.de (Peter Funk):
> > netatalk stores the resource forks in hidden sub
> > directories called .AppleDouble.

Greg Ewing:
> None of that is relevant if the copying is being done from
> the Mac end. To the Mac it just looks like a normal Mac
> file, so the standard Mac file-copying techniques will work.
> No need for any callbacks.

You are right, and I know this.  But if you write an application
which should work on the Unix/Linux side (for example a file manager
or something similar), you have to pay attention to these files on
your own.  The same holds true for thumbnail images, usually stored
in a .xvpics subdirectory.

All current filemanagers on Linux (KDE kfm, Gnome gmc) are flawed 
in this respect.

Regards, Peter
P.S.: I'm not going to write a GUI file manager in Python and using
shutil right now.  So this discussion is somewhat academic.
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)




From tim.one at home.com  Fri Mar 23 04:03:03 2001
From: tim.one at home.com (Tim Peters)
Date: Thu, 22 Mar 2001 22:03:03 -0500
Subject: [Python-Dev] CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEKNJHAA.tim.one@home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAELCJHAA.tim.one@home.com>

At work today, Guido and I both found lots of instabilities in current CVS
Python, under different flavors of Windows:  senseless errors in the test
suite, different behavior across runs, NULL-pointer errors in GC when running
under a debug-build Python, some kind of Windows "app error" alert box, and
weird complaints about missing attributes during Python shutdown.

Back at home, things *seem* much better, but I still get one of the errors I
saw at the office:  a NULL-pointer dereference in GC, using a debug-build
Python, in test_xmllib, while *compiling* xmllib.pyc (i.e., we're not
actually running the test yet, just compiling the module).  Alas, this does
not fail in isolation, it's only when a run of the whole test suite happens
to get to that point.  The error is in gc_list_remove, which is passed a node
whose left and right pointers are both NULL.

Only thing I know for sure is that it's not PyDict_Next's fault (I did a
quick run with *that* change commented out; made no difference).  That wasn't
just paranoia:  dict_traverse is two routines down the call stack when this
happens, and that uses PyDict_Next.

How's life on other platforms?  Anyone else ever build/test the debug Python?
Anyone have a hot efence/Insure raring to run?

not-picky-about-the-source-of-miracles-ly y'rs  - tim




From guido at digicool.com  Fri Mar 23 05:34:48 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 22 Mar 2001 23:34:48 -0500
Subject: [Python-Dev] Re: CVS Python is unstable
Message-ID: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>

Tim's problem can be reproduced in debug mode as follows (on Windows
as well as on Linux):

    import test.test_weakref
    import test.test_xmllib

Boom!  The debugger (on Windows) shows that it does in some GC code.

After backing out Fred's last change to _weakref.c, this works as
expected and I get no other problems.

So I propose to back out that change and be done with it.

Here's the CVS comment:

----------------------------
revision 1.8
date: 2001/03/22 18:05:30;  author: fdrake;  state: Exp;  lines: +1 -1

Inform the cycle-detector that the a weakref object no longer needs to be
tracked as soon as it is clear; this can decrease the number of roots for
the cycle detector sooner rather than later in applications which hold on
to weak references beyond the time of the invalidation.
----------------------------

And the diff, to be backed out:

*** _weakref.c	2001/02/27 18:36:56	1.7
--- _weakref.c	2001/03/22 18:05:30	1.8
***************
*** 59,64 ****
--- 59,65 ----
      if (self->wr_object != Py_None) {
          PyWeakReference **list = GET_WEAKREFS_LISTPTR(self->wr_object);
  
+         PyObject_GC_Fini((PyObject *)self);
          if (*list == self)
              *list = self->wr_next;
          self->wr_object = Py_None;
***************
*** 78,84 ****
  weakref_dealloc(PyWeakReference *self)
  {
      clear_weakref(self);
-     PyObject_GC_Fini((PyObject *)self);
      self->wr_next = free_list;
      free_list = self;
  }
--- 79,84 ----

Fred, can you explain what the intention of this code was?

It's not impossible that the bug is actually in the debug mode macros,
but I'd rather not ship code that's unstable in debug mode -- that
defeats the purpose.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Fri Mar 23 06:10:33 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 23 Mar 2001 00:10:33 -0500
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>

[Guido]
> It's not impossible that the bug is actually in the debug mode macros,
> but I'd rather not ship code that's instable in debug mode -- that
> defeats the purpose.

I *suspect* the difference wrt debug mode is right where it's blowing up:

static void
gc_list_remove(PyGC_Head *node)
{
	node->gc_prev->gc_next = node->gc_next;
	node->gc_next->gc_prev = node->gc_prev;
#ifdef Py_DEBUG
	node->gc_prev = NULL;
	node->gc_next = NULL;
#endif
}

That is, in debug mode, the prev and next fields are nulled out, but not in
release mode.

Whenever this thing dies, the node passed in has prev and next fields that
*are* nulled out.  Since under MS debug mode, freed memory is set to a very
distinctive non-null bit pattern, this tells me that-- most likely --some
single node is getting passed to gc_list_remove *twice*.

I bet that's happening in release mode too ... hang on a second ... yup!  If
I remove the #ifdef above, then the pair test_weakref test_xmllib dies with a
null-pointer error here under the release build too.
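
The failure mode is easy to reproduce outside the interpreter.  Here
is a toy Python transliteration of the C list code quoted above
(names kept, everything else invented for illustration); removing the
same node twice raises an error in "debug mode" (the analogue of the
NULL dereference) and silently corrupts the list otherwise:

```python
class GCHead:
    """Toy stand-in for the C PyGC_Head doubly-linked list node."""
    def __init__(self):
        self.gc_prev = None
        self.gc_next = None

def gc_list_init(head):
    head.gc_prev = head
    head.gc_next = head

def gc_list_append(node, head):
    node.gc_next = head
    node.gc_prev = head.gc_prev
    node.gc_prev.gc_next = node
    head.gc_prev = node

def gc_list_remove(node, debug=True):
    # Transliteration of the C routine quoted above.  If node was
    # already removed in debug mode, node.gc_prev is None and the
    # first line blows up -- the Python analogue of the NULL deref.
    node.gc_prev.gc_next = node.gc_next
    node.gc_next.gc_prev = node.gc_prev
    if debug:
        node.gc_prev = None
        node.gc_next = None
```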

and-that-ain't-good-ly y'rs  - tim




From tim.one at home.com  Fri Mar 23 06:56:05 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 23 Mar 2001 00:56:05 -0500
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELNJHAA.tim.one@home.com>

More info on the debug-mode

    test_weakref test_xmllib

blowup in gc_list_append, and with the .pyc files already there.

While running test_weakref, we call collect() once.

Ditto while running test_xmllib:  that's when it blows up.

collect_generations() is here (***):

	else {
		generation = 0;
		collections0++;
		if (generation0.gc_next != &generation0) {
***			n = collect(&generation0, &generation1);
		}
	}

collect() is here:

	gc_list_init(&reachable);
	move_roots(young, &reachable);
***	move_root_reachable(&reachable);

move_root_reachable is here:

***		(void) traverse(op,
			       (visitproc)visit_reachable,
			       (void *)reachable);

And that's really calling dict_traverse, which is iterating over the dict.

At blowup time, the dict key is of PyString_Type, with value "ref3", and so
presumably left over from test_weakref.  The dict value is of
PyWeakProxy_Type, has a refcount of 2, and has

    wr_object   pointing to Py_NoneStruct
    wr_callback NULL
    hash        0xffffffff
    wr_prev     NULL
    wr_next     NULL

It's dying while calling visit() (really visit_reachable) on the latter.

Inside visit_reachable, we have:

		if (gc && gc->gc_refs != GC_MOVED) {

and that's interesting too, because gc->gc_refs is 0xcdcdcdcd, which is the
MS debug-mode "clean landfill" value:  freshly malloc'ed memory is filled
with 0xcd bytes (so gc->gc_refs is uninitialized trash).

My conclusion:  it's really hosed.  Take it away, Neil <wink>!




From tim.one at home.com  Fri Mar 23 07:19:19 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 23 Mar 2001 01:19:19 -0500
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELOJHAA.tim.one@home.com>

> So I propose to back out that change and be done with it.

I just did revert the change (rev 1.8 of _weakref.c, back to 1.7), so anyone
interested in pursuing the details should NOT update.

There's another reason for not updating then:  the problem "went away" after
the next big pile of checkins, even before I reverted the change.  I assume
that's simply because things got jiggled enough so that we no longer hit
exactly the right sequence of internal operations.




From fdrake at acm.org  Fri Mar 23 07:50:21 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 01:50:21 -0500 (EST)
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
Message-ID: <15034.61997.299305.456415@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > That is, in debug mode, the prev and next fields are nulled out, but not in
 > release mode.
 > 
 > Whenever this thing dies, the node passed in has prev and next fields that
 > *are* nulled out.  Since under MS debug mode, freed memory is set to a very
 > distinctive non-null bit pattern, this tells me that-- most likely --some
 > single node is getting passed to gc_list_remove *twice*.
 > 
 > I bet that's happening in release mode too ... hang on a second ... yup!  If
 > I remove the #ifdef above, then the pair test_weakref test_xmllib dies with a
 > null-pointer error here under the release build too.

  Ok, I've been trying to keep up with all this, and playing with some
alternate patches.  The change that's been identified as causing the
problem was trying to remove the weak ref from the cycle detectors set
of known containers as soon as the ref object was no longer a
container.  Doing this from the tp_clear handler may be the problem:
the GC machinery is removing the object from the list, and calls
gc_list_remove() assuming that the object is still in the list, but
by then the tp_clear handler has already been called.
  I see a couple of options:

  - Document the restriction that PyObject_GC_Fini() should not be
    called on an object while its tp_clear handler is active (more
    efficient), -or-
  - Remove the restriction (safer).

  If we take the former route, I think it is still worth removing the
weakref object from the GC list as soon as it has been cleared, in
order to keep the number of containers the GC machinery has to inspect
at a minimum.  This can be done by adding a flag to
weakref.c:clear_weakref() indicating that the object's tp_clear is
active.  The extra flag would not be needed if we took the second
option.
  Another possibility, if I do adjust the code to remove the weakref
objects from the GC list aggressively, is to only call
PyObject_GC_Init() if the weakref actually has a callback -- if there
is no callback, the weakref object does not act as a container to
begin with.
  (It is also possible that with aggressive removal of the weakref
object from the set of containers, it doesn't need to implement the
tp_clear handler at all, in which case this gets just a little bit
nicer.)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From nas at arctrix.com  Fri Mar 23 14:41:02 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 05:41:02 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKELOJHAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 23, 2001 at 01:19:19AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCKELOJHAA.tim.one@home.com>
Message-ID: <20010323054102.A28875@glacier.fnational.com>

On Fri, Mar 23, 2001 at 01:19:19AM -0500, Tim Peters wrote:
> There's another reason for not updating then:  the problem "went away" after
> the next big pile of checkins, even before I reverted the change.  I assume
> that's simply because things got jiggled enough so that we no longer hit
> exactly the right sequence of internal operations.

Yes.

  Neil



From nas at arctrix.com  Fri Mar 23 14:47:40 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 05:47:40 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 23, 2001 at 12:10:33AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
Message-ID: <20010323054740.B28875@glacier.fnational.com>

On Fri, Mar 23, 2001 at 12:10:33AM -0500, Tim Peters wrote:
> I *suspect* the difference wrt debug mode is right where it's blowing up:
> 
> static void
> gc_list_remove(PyGC_Head *node)
> {
> 	node->gc_prev->gc_next = node->gc_next;
> 	node->gc_next->gc_prev = node->gc_prev;
> #ifdef Py_DEBUG
> 	node->gc_prev = NULL;
> 	node->gc_next = NULL;
> #endif
> }

PyObject_GC_Fini() should not be called twice on the same object
unless there is a PyObject_GC_Init() in between.  I suspect that
Fred's change made this happen.  When Py_DEBUG is not defined the
GC will do all sorts of strange things if you do this, hence the
debugging code.

  Neil



From nas at arctrix.com  Fri Mar 23 15:08:24 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 06:08:24 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <15034.61997.299305.456415@cj42289-a.reston1.va.home.com>; from fdrake@acm.org on Fri, Mar 23, 2001 at 01:50:21AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com> <15034.61997.299305.456415@cj42289-a.reston1.va.home.com>
Message-ID: <20010323060824.C28875@glacier.fnational.com>

On Fri, Mar 23, 2001 at 01:50:21AM -0500, Fred L. Drake, Jr. wrote:
> The change that's been identified as causing the problem was
> trying to remove the weak ref from the cycle detectors set of
> known containers as soon as the ref object was no longer a
> container.

I'm not sure what you mean by "no longer a container".  If the
object defines the GC type flag, the GC thinks it's a container.

> When this is done by the tp_clear handler may be the problem;
> the GC machinery is removing the object from the list, and
> calls gc_list_remove() assuming that the object is still in the
> list, but after the tp_clear handler has been called.

I believe your problems are deeper than this.  If
PyObject_IS_GC(op) is true and op is reachable from other objects
known to the GC then op must be in the linked list.  I haven't
tracked down all the locations in gcmodule where this assumption
is made but visit_reachable is one example.

We could remove this restriction if we were willing to accept
some slowdown.  One way would be to add the invariant
(gc_next == NULL) if the object is not in the GC list.  PyObject_Init
and gc_list_remove would have to set this pointer.  Is it worth
doing?
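
Neil's proposed invariant can be sketched in Python (illustrative only;
the real structures are C in gcmodule.c, and the names below mirror the
PyGC_Head fields): gc_next is None exactly when the object is not in the
collector's list, so "is this object tracked?" becomes a cheap pointer test.

```python
class GCHead:
    """Doubly linked list node; gc_next is None when not tracked."""
    def __init__(self):
        self.gc_prev = None
        self.gc_next = None

class GCList:
    """Circular doubly linked list with a sentinel head, as in gcmodule."""
    def __init__(self):
        self.head = GCHead()
        self.head.gc_prev = self.head
        self.head.gc_next = self.head

    def append(self, node):
        last = self.head.gc_prev
        last.gc_next = node
        node.gc_prev = last
        node.gc_next = self.head
        self.head.gc_prev = node

    def remove(self, node):
        node.gc_prev.gc_next = node.gc_next
        node.gc_next.gc_prev = node.gc_prev
        # The proposed invariant: clear the pointers unconditionally,
        # not just under Py_DEBUG, so None means "not in the GC list".
        node.gc_prev = None
        node.gc_next = None

def is_tracked(node):
    return node.gc_next is not None
```

The slowdown Neil mentions is the unconditional pointer clearing on every
removal, in exchange for being able to test membership safely.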

  Neil



From gward at python.net  Fri Mar 23 16:04:07 2001
From: gward at python.net (Greg Ward)
Date: Fri, 23 Mar 2001 10:04:07 -0500
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <15034.33486.157946.686067@mace.lucasdigital.com>; from tommy@ilm.com on Thu, Mar 22, 2001 at 03:03:29PM -0800
References: <15034.33486.157946.686067@mace.lucasdigital.com>
Message-ID: <20010323100407.A8367@cthulhu.gerg.ca>

On 22 March 2001, Flying Cougar Burnette said:
> Can anyone think of a good reason why we don't import rlcompleter
> instead of readline by default?  I can give you a good reason why it
> *should*, but I'd rather not bore anyone with the details if I don't
> have to.

Haven't tried your patch, but when you "import rlcompleter" manually in
an interactive session, that's not enough.  You also have to call

  readline.parse_and_bind("tab: complete")

*Then* <tab> does the right thing (ie. completion in the interpreter's
global namespace).  I like it, but I'll bet Guido won't because you can
always do this:

  $ cat > ~/.pythonrc
  import readline, rlcompleter
  readline.parse_and_bind("tab: complete")

and put "export PYTHONSTARTUP=~/.pythonrc" in your ~/.profile (or
whatever) to achieve the same effect.
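
The completion machinery itself is just rlcompleter.Completer, which can be
exercised without readline or a terminal at all; a minimal sketch (the
namespace contents below are made up):

```python
import rlcompleter

# Completer can be bound to an explicit namespace; complete(text, state)
# returns the state-th match for text, or None when matches run out.
completer = rlcompleter.Completer({"spam": 1, "spades": 2})

matches = set()
state = 0
while True:
    m = completer.complete("spa", state)
    if m is None:
        break
    matches.add(m)
    state += 1
```

This is the same interface readline drives when parse_and_bind("tab:
complete") is in effect: readline calls complete() with increasing state
values until it gets None back.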

But I think having this convenience built-in for free would be a very
nice thing.  I used Python for over a year before I found out about
PYTHONSTARTUP, and it was another year after that that I learned about
readline.parse_and_bind().  Why not save future newbies the bother?

        Greg
-- 
Greg Ward - Linux nerd                                  gward at python.net
http://starship.python.net/~gward/
Animals can be driven crazy by placing too many in too small a pen. 
Homo sapiens is the only animal that voluntarily does this to himself.



From fdrake at acm.org  Fri Mar 23 16:22:37 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 10:22:37 -0500 (EST)
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <20010323100407.A8367@cthulhu.gerg.ca>
References: <15034.33486.157946.686067@mace.lucasdigital.com>
	<20010323100407.A8367@cthulhu.gerg.ca>
Message-ID: <15035.27197.714696.640238@localhost.localdomain>

Greg Ward writes:
 > But I think having this convenience built-in for free would be a very
 > nice thing.  I used Python for over a year before I found out about
 > PYTHONSTARTUP, and it was another year after that that I learned about
 > readline.parse_and_bind().  Why not save future newbies the bother?

  Maybe.  Or perhaps you should have looked at the tutorial?  ;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From jeremy at alum.mit.edu  Fri Mar 23 16:31:56 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Fri, 23 Mar 2001 10:31:56 -0500 (EST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net>

Are there any more checkins coming?

In general -- are there any checkins other than documentation and a
fix for the GC/debug/weakref problem?

Jeremy



From fdrake at acm.org  Fri Mar 23 16:35:24 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 10:35:24 -0500 (EST)
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <20010323060824.C28875@glacier.fnational.com>
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
	<15034.61997.299305.456415@cj42289-a.reston1.va.home.com>
	<20010323060824.C28875@glacier.fnational.com>
Message-ID: <15035.27964.645249.362484@localhost.localdomain>

Neil Schemenauer writes:
 > I'm not sure what you mean by "no longer a container".  If the
 > object defines the GC type flag the GC thinks it's a container.

  Given the assumptions you describe, removing the object from the
list isn't sufficient to not be a container.  ;-(  In which case
reverting the change (as Tim did) is probably the only way to do it.
  What I was looking for was a way to remove the weakref object from
the set of containers sooner, but apparently that isn't possible as
long as the object's type is the only thing used to determine whether it
is a container.

 > I believe your problems are deeper than this.  If
 > PyObject_IS_GC(op) is true and op is reachable from other objects

  And this only considers the object's type; the object can't be
removed from the set of containers by calling PyObject_GC_Fini().  (It
clearly can't while tp_clear is active for that object!)

 > known to the GC then op must be in the linked list.  I haven't
 > tracked down all the locations in gcmodule where this assumption
 > is made but visit_reachable is one example.

  So it's illegal to call PyObject_GC_Fini() anywhere but from the
destructor?  Please let me know so I can make this clear in the
documentation!

 > We could remove this restriction if we were willing to accept
 > some slowdown.  One way would be to add the invariant
 > (gc_next == NULL) if the object is not in the GC list.  PyObject_Init
 > and gc_list_remove would have to set this pointer.  Is it worth
 > doing?

  It's not at all clear that we need to remove the restriction --
documenting it would be required.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From ping at lfw.org  Fri Mar 23 16:44:54 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 23 Mar 2001 07:44:54 -0800 (PST)
Subject: [Python-Dev] Re: any more checkins
In-Reply-To: <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org>

On Fri, 23 Mar 2001, Jeremy Hylton wrote:
> Are there any more checkins coming?

There are still issues in pydoc to be solved, but i think they can
be reasonably considered bugfixes rather than new features.  The
two main messy ones are getting reloading right (i am really hurting
for lack of a working find_module here!) and handling more strange
aliasing cases (HTMLgen, for example, provides many classes under
multiple names).  I hope it will be okay for me to work on these two
main fixes in the coming week.


-- ?!ng




From guido at digicool.com  Fri Mar 23 16:45:04 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 10:45:04 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: Your message of "Fri, 23 Mar 2001 10:31:56 EST."
             <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>  
            <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103231545.KAA10940@cj20424-a.reston1.va.home.com>

> Are there any more checkins coming?
> 
> In general -- are there any checkins other than documentation and a
> fix for the GC/debug/weakref problem?

I think one more from Ping, for a detail in sys.excepthook.

The GC issue is dealt with as far as I'm concerned -- any changes that
Neil suggests are too speculative to attempt this late in the game,
and Fred's patch has already been backed out by Tim.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Fri Mar 23 16:49:13 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 10:49:13 -0500
Subject: [Python-Dev] Re: any more checkins
In-Reply-To: Your message of "Fri, 23 Mar 2001 07:44:54 PST."
             <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org> 
References: <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org> 
Message-ID: <200103231549.KAA10977@cj20424-a.reston1.va.home.com>

> There are still issues in pydoc to be solved, but i think they can
> be reasonably considered bugfixes rather than new features.  The
> two main messy ones are getting reloading right (i am really hurting
> for lack of a working find_module here!) and handling more strange
> aliasing cases (HTMLgen, for example, provides many classes under
> multiple names).  I hope it will be okay for me to work on these two
> main fixes in the coming week.

This is fine after the b2 release.  I consider pydoc a "1.0" release
anyway, so it's okay if its development speed is different than that
of the rest of Python!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From nas at arctrix.com  Fri Mar 23 16:53:15 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 07:53:15 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <15035.27964.645249.362484@localhost.localdomain>; from fdrake@acm.org on Fri, Mar 23, 2001 at 10:35:24AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com> <15034.61997.299305.456415@cj42289-a.reston1.va.home.com> <20010323060824.C28875@glacier.fnational.com> <15035.27964.645249.362484@localhost.localdomain>
Message-ID: <20010323075315.A29414@glacier.fnational.com>

On Fri, Mar 23, 2001 at 10:35:24AM -0500, Fred L. Drake, Jr. wrote:
>   So it's illegal to call PyObject_GC_Fini() anywhere but from the
> destructor?  Please let me know so I can make this clear in the
> documentation!

No, it's okay as long as the object is not reachable from other
objects.  When tuples are added to the tuple free-list
PyObject_GC_Fini() is called.  When they are removed
PyObject_GC_Init() is called.  This is okay because free tuples
aren't reachable from anywhere else.

> It's not at all clear that we need to remove the restriction --
> documenting it would be required.

Yah, sorry about that.  I had forgotten about that restriction.
When I saw Tim's message things started to come back to me.  I
had to study the code a bit to remember how things worked.

  Neil



From aahz at panix.com  Fri Mar 23 16:46:54 2001
From: aahz at panix.com (aahz at panix.com)
Date: Fri, 23 Mar 2001 10:46:54 -0500 (EST)
Subject: [Python-Dev] Re: Python T-shirts
References: <mailman.985019605.8781.python-list@python.org>
Message-ID: <200103231546.KAA29483@panix6.panix.com>

[posted to c.l.py with cc to python-dev]

In article <mailman.985019605.8781.python-list at python.org>,
Guido van Rossum  <guido at digicool.com> wrote:
>
>At the conference we handed out T-shirts with the slogan on the back
>"Python: programming the way Guido indented it".  We've been asked if
>there are any left.  Well, we gave them all away, but we're ordering
>more.  You can get them for $10 + S+H.  Write to Melissa Light
><melissa at digicool.com>.  Be nice to her!

If you're in the USA, S&H is $3.50, for a total cost of $13.50.  Also,
at the conference, all t-shirts were size L, but Melissa says that
she'll take size requests (since they haven't actually ordered the
t-shirts yet).
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"I won't accept a model of the universe in which free will, omniscient
gods, and atheism are simultaneously true."  -- M



From nas at arctrix.com  Fri Mar 23 16:55:15 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 07:55:15 -0800
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: <200103231545.KAA10940@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 23, 2001 at 10:45:04AM -0500
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net> <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net> <200103231545.KAA10940@cj20424-a.reston1.va.home.com>
Message-ID: <20010323075515.B29414@glacier.fnational.com>

On Fri, Mar 23, 2001 at 10:45:04AM -0500, Guido van Rossum wrote:
> The GC issue is dealt with as far as I'm concerned -- any changes that
> Neil suggests are too speculative to attempt this late in the game,
> and Fred's patch has already been backed out by Tim.

I agree.

  Neil



From ping at lfw.org  Fri Mar 23 16:56:56 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 23 Mar 2001 07:56:56 -0800 (PST)
Subject: [Python-Dev] Re: any more checkins
In-Reply-To: <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org>
Message-ID: <Pine.LNX.4.10.10103230750340.4368-100000@skuld.kingmanhall.org>

On Fri, 23 Mar 2001, Ka-Ping Yee wrote:
> two main messy ones are getting reloading right (i am really hurting
> for lack of a working find_module here!)

I made an attempt at this last night but didn't finish, so reloading
isn't correct at the moment for submodules in packages.  It appears
that i'm going to have to build a few pieces of infrastructure to make
it work well: a find_module that understands packages, a sure-fire
way of distinguishing the different kinds of ImportError, and a
reliable reloader in the end.  The particular issue of incompletely-
imported modules is especially thorny, and i don't know if there's
going to be any good solution for that.

Oh, and it would be nice for the "help" object to be a little more
informative, but that could just be considered documentation; and
a test_pydoc suite would be good.


-- ?!ng




From fdrake at acm.org  Fri Mar 23 16:55:10 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 10:55:10 -0500 (EST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: <200103231545.KAA10940@cj20424-a.reston1.va.home.com>
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>
	<15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103231545.KAA10940@cj20424-a.reston1.va.home.com>
Message-ID: <15035.29150.755915.883372@localhost.localdomain>

Guido van Rossum writes:
 > The GC issue is dealt with as far as I'm concerned -- any changes that
 > Neil suggests are too speculative to attempt this late in the game,
 > and Fred's patch has already been backed out by Tim.

  Agreed -- I don't think we need to change this further for 2.1.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From thomas at xs4all.net  Fri Mar 23 17:31:38 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 23 Mar 2001 17:31:38 +0100
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <20010323100407.A8367@cthulhu.gerg.ca>; from gward@python.net on Fri, Mar 23, 2001 at 10:04:07AM -0500
References: <15034.33486.157946.686067@mace.lucasdigital.com> <20010323100407.A8367@cthulhu.gerg.ca>
Message-ID: <20010323173138.E13066@xs4all.nl>

On Fri, Mar 23, 2001 at 10:04:07AM -0500, Greg Ward wrote:

> But I think having this convenience built-in for free would be a very
> nice thing.  I used Python for over a year before I found out about
> PYTHONSTARTUP, and it was another year after that that I learned about
> readline.parse_and_bind().  Why not save future newbies the bother?

And break all those poor users who use tab in interactive mode (like *me*)
to mean tab, not 'complete me please' ? No, please don't do that :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at acm.org  Fri Mar 23 18:43:55 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 12:43:55 -0500 (EST)
Subject: [Python-Dev] Doc/ tree frozen for 2.1b2 release
Message-ID: <15035.35675.217841.967860@localhost.localdomain>

  I'm freezing the doc tree until after the 2.1b2 release is made.
Please do not make any further checkins there.
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From moshez at zadka.site.co.il  Fri Mar 23 20:08:22 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Fri, 23 Mar 2001 21:08:22 +0200
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
Message-ID: <E14gWv8-0001OB-00@darjeeling>

Now that we have rich comparisons, I've suddenly realized they are
not rich enough. Consider a set type.

>>> a = set([1,2])
>>> b = set([1,3])
>>> a>b
0
>>> a<b
0
>>> max(a,b) == a
1

While I'd like

>>> max(a,b) == set([1,2,3])
>>> min(a,b) == set([1])

In current Python, there's no way to do it.
I'm still thinking about this. If it bothers anyone else, I'd
be happy to know about it.
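
For what it's worth, rich comparisons are already enough to express the
inclusion order itself; a minimal sketch of such a set type (using
frozenset internally just for brevity, since the set type under discussion
is hypothetical):

```python
class Set:
    """A toy set whose comparisons form a partial order (inclusion)."""
    def __init__(self, items):
        self.items = frozenset(items)

    def __eq__(self, other):
        return self.items == other.items

    def __lt__(self, other):    # proper subset
        return self.items < other.items

    def __gt__(self, other):    # proper superset
        return self.items > other.items

a, b = Set([1, 2]), Set([1, 3])
# a and b are simply incomparable -- all three tests are false:
incomparable = not a < b and not a > b and not a == b
```

What rich comparisons do not give you is a max()/min() that returns
something other than one of the arguments, which is the actual complaint here.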
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From fdrake at localhost.localdomain  Fri Mar 23 20:11:52 2001
From: fdrake at localhost.localdomain (Fred Drake)
Date: Fri, 23 Mar 2001 14:11:52 -0500 (EST)
Subject: [Python-Dev] [development doc updates]
Message-ID: <20010323191152.3019628995@localhost.localdomain>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/


Documentation for the second beta release of Python 2.1.

This includes information on future statements and lexical scoping,
and weak references.  Much of the module documentation has been
improved as well.




From guido at digicool.com  Fri Mar 23 20:20:21 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 14:20:21 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: Your message of "Fri, 23 Mar 2001 21:08:22 +0200."
             <E14gWv8-0001OB-00@darjeeling> 
References: <E14gWv8-0001OB-00@darjeeling> 
Message-ID: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>

> Now that we have rich comparisons, I've suddenly realized they are
> not rich enough. Consider a set type.
> 
> >>> a = set([1,2])
> >>> b = set([1,3])
> >>> a>b
> 0
> >>> a<b
> 0

I'd expect both of these to raise an exception.

> >>> max(a,b) == a
> 1
> 
> While I'd like
> 
> >>> max(a,b) == set([1,2,3])
> >>> min(a,b) == set([1])

You shouldn't call that max() or min().  These functions are supposed
to return one of their arguments (or an item from their argument
collection), not a composite.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From ping at lfw.org  Fri Mar 23 20:35:43 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 23 Mar 2001 11:35:43 -0800 (PST)
Subject: [Python-Dev] Re: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <E14gWv8-0001OB-00@darjeeling>
Message-ID: <Pine.LNX.4.10.10103231134360.4368-100000@skuld.kingmanhall.org>

On Fri, 23 Mar 2001, Moshe Zadka wrote:
> >>> a = set([1,2])
> >>> b = set([1,3])
[...]
> While I'd like
> 
> >>> max(a,b) == set([1,2,3])
> >>> min(a,b) == set([1])

The operation you're talking about isn't really max or min.

Why not simply write:

    >>> a | b
    [1, 2, 3]
    >>> a & b
    [1]

?
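
Using frozenset purely as a stand-in for the hypothetical set type, __or__
and __and__ give exactly those spellings:

```python
a = frozenset([1, 2])
b = frozenset([1, 3])

union = a | b          # smallest set containing both a and b
intersection = a & b   # largest set contained in both a and b
```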


-- ?!ng




From fdrake at acm.org  Fri Mar 23 21:38:55 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 15:38:55 -0500 (EST)
Subject: [Python-Dev] Anyone using weakrefs?
Message-ID: <15035.46175.599654.851399@localhost.localdomain>

  Is anyone out there playing with the weak references support yet?
I'd *really* appreciate receiving a short snippet of non-contrived
code that makes use of weak references to use in the documentation.
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From tommy at ilm.com  Fri Mar 23 22:12:49 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Fri, 23 Mar 2001 13:12:49 -0800 (PST)
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <20010323173138.E13066@xs4all.nl>
References: <15034.33486.157946.686067@mace.lucasdigital.com>
	<20010323100407.A8367@cthulhu.gerg.ca>
	<20010323173138.E13066@xs4all.nl>
Message-ID: <15035.48030.112179.717830@mace.lucasdigital.com>

But if we just change the readline import to rlcompleter and *don't*
do the parse_and_bind trick then your TABs will not be impacted,
correct?  Will we lose anything by making this switch?



Thomas Wouters writes:
| On Fri, Mar 23, 2001 at 10:04:07AM -0500, Greg Ward wrote:
| 
| > But I think having this convenience built-in for free would be a very
| > nice thing.  I used Python for over a year before I found out about
| > PYTHONSTARTUP, and it was another year after that that I learned about
| > readline.parse_and_bind().  Why not save future newbies the bother?
| 
| And break all those poor users who use tab in interactive mode (like *me*)
| to mean tab, not 'complete me please' ? No, please don't do that :)
| 
| -- 
| Thomas Wouters <thomas at xs4all.net>
| 
| Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
| 
| _______________________________________________
| Python-Dev mailing list
| Python-Dev at python.org
| http://mail.python.org/mailman/listinfo/python-dev



From moshez at zadka.site.co.il  Fri Mar 23 21:30:12 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Fri, 23 Mar 2001 22:30:12 +0200
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>
References: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>, <E14gWv8-0001OB-00@darjeeling>
Message-ID: <E14gYCK-0001VT-00@darjeeling>

On Fri, 23 Mar 2001 14:20:21 -0500, Guido van Rossum <guido at digicool.com> wrote:

> > >>> a = set([1,2])
> > >>> b = set([1,3])
> > >>> a>b
> > 0
> > >>> a<b
> > 0
> 
> I'd expect both of these to raise an exception.
 
I wouldn't. a>b means "does a contain b". It doesn't.
There *is* a partial order on sets: partial means a<b, a>b, a==b can all
be false, but there is a meaning for all of them.

FWIW, I'd be for a partial order on complex numbers too 
(a<b iff a.real<b.real and a.imag<b.imag)

> > >>> max(a,b) == a
> > 1
> > 
> > While I'd like
> > 
> > >>> max(a,b) == set([1,2,3])
> > >>> min(a,b) == set([1])
> 
> You shouldn't call that max() or min().

I didn't. Mathematicians do.
The mathematical definition for max() I learned in Calculus 101 was
"the smallest element which is > all arguments" (hence, properly speaking,
max should also specify the set in which it takes place. Doesn't seem to
matter in real life)

>  These functions are supposed
> to return one of their arguments

Why? 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Fri Mar 23 22:41:14 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 16:41:14 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: Your message of "Fri, 23 Mar 2001 22:30:12 +0200."
             <E14gYCK-0001VT-00@darjeeling> 
References: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>, <E14gWv8-0001OB-00@darjeeling>  
            <E14gYCK-0001VT-00@darjeeling> 
Message-ID: <200103232141.QAA14771@cj20424-a.reston1.va.home.com>

> > > >>> a = set([1,2])
> > > >>> b = set([1,3])
> > > >>> a>b
> > > 0
> > > >>> a<b
> > > 0
> > 
> > I'd expect both of these to raise an exception.
>  
> I wouldn't. a>b means "does a contain b". It doesn't.
> There *is* a partial order on sets: partial means a<b, a>b, a==b can all
> be false, but that there is a meaning for all of them.

Agreed, you can define < and > any way you want on your sets.  (Why
not <= and >=?  Doesn't a<b suggest that b has at least one element not
in a?)

> FWIW, I'd be for a partial order on complex numbers too 
> (a<b iff a.real<b.real and a.imag<b.imag)

Where is that useful?  Are there mathematicians who define it this way?

> > > >>> max(a,b) == a
> > > 1
> > > 
> > > While I'd like
> > > 
> > > >>> max(a,b) == set([1,2,3])
> > > >>> min(a,b) == set([1])
> > 
> > You shouldn't call that max() or min().
> 
> I didn't. Mathematicians do.
> The mathematical definition for max() I learned in Calculus 101 was
> "the smallest element which is > then all arguments" (hence, properly speaking,
> max should also specify the set in which it takes place. Doesn't seem to
> matter in real life)

Sorry, mathematicians can overload stuff that you can't in Python.
Write your own operator, function or method to calculate this, just
don't call it max.  And as someone else remarked, a|b and a&b might
already fit this bill.

> >  These functions are supposed
> > to return one of their arguments
> 
> Why?


From tim.one at home.com  Fri Mar 23 22:47:41 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 23 Mar 2001 16:47:41 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <E14gYCK-0001VT-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>

[Moshe]
> The mathematical definition for max() I learned in Calculus 101 was
> "the smallest element which is > all arguments"

Then I guess American and Dutch calculus are different.  Assuming you meant
to type >=, that's the definition of what we called the "least upper bound"
(or "lub") or "supremum" (or "sup"); and what I suppose you call "min" we
called "greatest lower bound" (or "glb") or "infimum".  I've never before
heard max or min used for these.  In lattices, a glb operator is often called
"meet" and a lub operator "join", but again I don't think I've ever seen them
called max or min.
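
In the lattice of sets ordered by inclusion, the join (lub) of two sets is
their union; a quick sketch of the definition, restricted to an explicit
candidate pool since a real sup would range over the whole lattice:

```python
def lub(args, candidates):
    """Least upper bound: the smallest candidate bounding every arg."""
    bounds = [x for x in candidates if all(a <= x for a in args)]
    return min(bounds, key=len)

a, b = frozenset([1, 2]), frozenset([1, 3])
pool = [a, b, a | b, a | b | frozenset([4])]
# The lub of a and b is a | b, which is neither a nor b --
# exactly the sup-vs-max distinction being made above.
```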

[Guido]
>>  These functions are supposed to return one of their arguments

[Moshe]
> Why?

Because Guido said so <wink>.  Besides, it's apparently the only meaning he
ever heard of; me too.




From esr at thyrsus.com  Fri Mar 23 23:08:52 2001
From: esr at thyrsus.com (Eric S. Raymond)
Date: Fri, 23 Mar 2001 17:08:52 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 23, 2001 at 04:47:41PM -0500
References: <E14gYCK-0001VT-00@darjeeling> <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>
Message-ID: <20010323170851.A2802@thyrsus.com>

Tim Peters <tim.one at home.com>:
> [Moshe]
> > The mathematical definition for max() I learned in Calculus 101 was
> > "the smallest element which is > all arguments"
> 
> Then I guess American and Dutch calculus are different.  Assuming you meant
> to type >=, that's the definition of what we called the "least upper bound"
> (or "lub") or "supremum" (or "sup"); and what I suppose you call "min" we
> called "greatest lower bound" (or "glb") or "infimum".  I've never before
> heard max or min used for these.  In lattices, a glb operator is often called
> "meet" and a lub operator "join", but again I don't think I've ever seen them
> called max or min.

Eric, speaking as a defrocked mathematician who was at one time rather
intimate with lattice theory, concurs.  However, Tim, I suspect you
will shortly discover that Moshe ain't Dutch.  I didn't ask and I
could be wrong, but at PC9 Moshe's accent and body language fairly
shouted "Israeli" at me.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

[President Clinton] boasts about 186,000 people denied firearms under
the Brady Law rules.  The Brady Law has been in force for three years.  In
that time, they have prosecuted seven people and put three of them in
prison.  You know, the President has entertained more felons than that at
fundraising coffees in the White House, for Pete's sake."
	-- Charlton Heston, FOX News Sunday, 18 May 1997



From tim.one at home.com  Fri Mar 23 23:11:50 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 23 Mar 2001 17:11:50 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <20010323170851.A2802@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEPEJHAA.tim.one@home.com>

> Eric, speaking as a defrocked mathematician who was at one time rather
> intimate with lattice theory, concurs.  However, Tim, I suspect you
> will shortly discover that Moshe ain't Dutch.  I didn't ask and I
> could be wrong, but at PC9 Moshe's accent and body language fairly
> shouted "Israeli" at me.

Well, applying Moshe's theory of max to my message, you should have realized
that Israeli = max{American, Dutch}.  That is

    Then I guess American and Dutch calculus are different.

was missing

    (from Israeli calculus)

As you'll shortly discover from his temper when his perfidious schemes are
frustrated, Guido is the Dutch guy in this debate <wink>.

although-i-prefer-to-be-thought-of-as-plutonian-ly y'rs  - tim




From guido at digicool.com  Fri Mar 23 23:29:02 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 17:29:02 -0500
Subject: [Python-Dev] Python 2.1b2 released
Message-ID: <200103232229.RAA19777@cj20424-a.reston1.va.home.com>

On time, and with a minimum of fuss, we've released Python 2.1b2.
Thanks again to the many developers who contributed!

Check it out on the Python website:

    http://www.python.org/2.1/

or on SourceForge:

    http://sourceforge.net/project/showfiles.php?group_id=5470&release_id=28334

As it behooves a second beta release, there's no really big news since
2.1b1 was released on March 2:

- Bugs fixed and documentation added. There's now an appendix of the
  Reference Manual documenting nested scopes:

    http://python.sourceforge.net/devel-docs/ref/futures.html

- When nested scopes are enabled by "from __future__ import
  nested_scopes", this also applies to exec, eval() and execfile(),
  and into the interactive interpreter (when using -i).

- Assignment to the internal global variable __debug__ is now illegal.

- unittest.py, a unit testing framework by Steve Purcell (PyUNIT,
  inspired by JUnit), is now part of the standard library.  See the
  PyUnit webpage for documentation:

    http://pyunit.sourceforge.net/

Andrew Kuchling has written (and is continuously updating) an
extensive overview: What's New in Python 2.1:

    http://www.amk.ca/python/2.1/

See also the Release notes posted on SourceForge:

    http://sourceforge.net/project/shownotes.php?release_id=28334

We are planning to make the final release of Python 2.1 on April 13;
we may release a release candidate a week earlier.

We're also planning a bugfix release for Python 2.0, dubbed 2.0.1; we
don't have a release schedule for this yet.  We could use a volunteer
to act as the bug release manager!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From paulp at ActiveState.com  Sat Mar 24 00:54:19 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 15:54:19 -0800
Subject: [Python-Dev] [Fwd: Python 2.1b2 released]
Message-ID: <3ABBE22B.DBAE4552@ActiveState.com>


-------- Original Message --------
Subject: Python 2.1b2 released
Date: Fri, 23 Mar 2001 17:29:02 -0500
From: Guido van Rossum <guido at digicool.com>
To: python-dev at python.org, Python mailing list
<python-list at python.org>,python-announce at python.org

On time, and with a minimum of fuss, we've released Python 2.1b2.
Thanks again to the many developers who contributed!

Check it out on the Python website:

    http://www.python.org/2.1/

or on SourceForge:

    http://sourceforge.net/project/showfiles.php?group_id=5470&release_id=28334

As befits a second beta release, there's no really big news since
2.1b1 was released on March 2:

- Bugs fixed and documentation added. There's now an appendix of the
  Reference Manual documenting nested scopes:

    http://python.sourceforge.net/devel-docs/ref/futures.html

- When nested scopes are enabled by "from __future__ import
  nested_scopes", this also applies to exec, eval() and execfile(),
  and to the interactive interpreter (when using -i).

- Assignment to the internal global variable __debug__ is now illegal.

- unittest.py, a unit testing framework by Steve Purcell (PyUNIT,
  inspired by JUnit), is now part of the standard library.  See the
  PyUnit webpage for documentation:

    http://pyunit.sourceforge.net/

Andrew Kuchling has written (and is continuously updating) an
extensive overview: What's New in Python 2.1:

    http://www.amk.ca/python/2.1/

See also the Release notes posted on SourceForge:

    http://sourceforge.net/project/shownotes.php?release_id=28334

We are planning to make the final release of Python 2.1 on April 13;
we may release a release candidate a week earlier.

We're also planning a bugfix release for Python 2.0, dubbed 2.0.1; we
don't have a release schedule for this yet.  We could use a volunteer
to act as the bug release manager!

--Guido van Rossum (home page: http://www.python.org/~guido/)

-- 
http://mail.python.org/mailman/listinfo/python-list



From paulp at ActiveState.com  Sat Mar 24 01:15:30 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 16:15:30 -0800
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" 
 Comparisons?
References: <E14gYCK-0001VT-00@darjeeling> <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com> <20010323170851.A2802@thyrsus.com>
Message-ID: <3ABBE722.B29684A1@ActiveState.com>

"Eric S. Raymond" wrote:
> 
>...
> 
> Eric, speaking as a defrocked mathematician who was at one time rather
> intimate with lattice theory, concurs.  However, Tim, I suspect you
> will shortly discover that Moshe ain't Dutch.  I didn't ask and I
> could be wrong, but at PC9 Moshe's accent and body language fairly
> shouted "Israeli" at me.

Not to mention his top-level-domain. Sorry, I couldn't resist.

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From paulp at ActiveState.com  Sat Mar 24 01:21:10 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 16:21:10 -0800
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" 
 Comparisons?
References: <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>
Message-ID: <3ABBE876.8EC91425@ActiveState.com>

Tim Peters wrote:
> 
> [Moshe]
> > The mathematical definition for max() I learned in Calculus 101 was
> > "the smallest element which is > then all arguments"
> 
> Then I guess American and Dutch calculus are different.  Assuming you meant
> to type >=, that's the definition of what we called the "least upper bound"
> (or "lub") or "supremum" (or "sup"); and what I suppose you call "min" we
> called "greatest lower bound" (or "glb") or "infimum".  

As long as we're shooting the shit on a Friday afternoon...

http://www.emba.uvm.edu/~read/TI86/maxmin.html
http://www.math.com/tables/derivatives/extrema.htm

Look at that domain name. Are you going to argue with that??? A
corporation dedicated to mathematics?

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From paulp at ActiveState.com  Sat Mar 24 02:16:03 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 17:16:03 -0800
Subject: [Python-Dev] Making types behave like classes
Message-ID: <3ABBF553.274D535@ActiveState.com>

These are some half-baked ideas about getting classes and types to look
more similar. I would like to know whether they are workable or not and
so I present them to the people best equipped to tell me.

Many extension types have a __getattr__ that looks like this:

static PyObject *
Xxo_getattr(XxoObject *self, char *name)
{
	/* try to do some work with known attribute names, else: */

	return Py_FindMethod(Xxo_methods, (PyObject *)self, name);
}

Py_FindMethod can (despite its name) return any Python object, including
ordinary (non-function) attributes. It also has complete access to the
object's state and type through the self parameter. Here's what we do
today for __doc__:

		if (strcmp(name, "__doc__") == 0) {
			char *doc = self->ob_type->tp_doc;
			if (doc != NULL)
				return PyString_FromString(doc);
		}

Why can't we do this for all magic methods? 

	* __class__ would return the type object
	* __add__, __len__, __call__, ... would return a method wrapper
around the appropriate slot
	* __init__ might map to a no-op

I think that Py_FindMethod could even implement inheritance between
types if we wanted.

We already do this magic for __methods__ and __doc__. Why not for all of
the magic methods?
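
As a sketch only (the names and the slot table here are invented, and
modern Python looks up real dunder methods on the type rather than the
instance, so ordinary names are used instead), the dispatch being
proposed looks roughly like this in Python terms: one fallback lookup
that serves both plain attributes and callable wrappers out of a
single slot table, much as Py_FindMethod could.

```python
class SlotProxy:
    # Hypothetical stand-in for the type's tp_* slots.
    _slots = {
        'docstring': 'stored on the type, like tp_doc',
        'length': lambda self: 3,   # stands in for a function slot
    }

    def __getattr__(self, name):
        # Reached only when normal lookup fails -- the same position
        # Py_FindMethod occupies in an extension type's getattr.
        entry = self._slots.get(name)
        if entry is None:
            raise AttributeError(name)
        if callable(entry):
            # Wrap the "slot" so it behaves like a bound method.
            return lambda *args: entry(self, *args)
        return entry

p = SlotProxy()
print(p.docstring)   # served as a plain attribute
print(p.length())    # served as a method wrapper around the slot
```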

Many other types implement no getattr at all (the slot is NULL). In that
case, I think that we have carte blanche to define their getattr
behavior as instance-like as possible.

Finally there are the types with getattrs that do not dispatch to
Py_FindMethod. we can just change those over manually. Extension authors
will do the same when they realize that their types are not inheriting
the features that the other types are.

Benefits:

	* objects based on extension types would "look more like" classes to
Python programmers so there is less confusion about how they are
different

	* users could stop using the type() function to get concrete types and
instead use __class__. After a version or two, type() could be formally
deprecated in favor of isinstance and __class__.

	* we will have started some momentum towards type/class unification
which we could continue on into __setattr__ and subclassing.

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From jafo at tummy.com  Sat Mar 24 07:50:08 2001
From: jafo at tummy.com (Sean Reifschneider)
Date: Fri, 23 Mar 2001 23:50:08 -0700
Subject: [Python-Dev] Python 2.1b2 SRPM (was: Re: Python 2.1b2 released)
In-Reply-To: <200103232229.RAA19777@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 23, 2001 at 05:29:02PM -0500
References: <200103232229.RAA19777@cj20424-a.reston1.va.home.com>
Message-ID: <20010323235008.A30668@tummy.com>

Shy of RPMs because of library or other dependency problems with most of
the RPMs you pick up?  The cure, in my experience, is to pick up an SRPM.
All you need to do to build a binary package tailored to your system is run
"rpm --rebuild <packagename>.src.rpm".

I've just put up an SRPM of the 2.1b2 release at:

   ftp://ftp.tummy.com/pub/tummy/RPMS/SRPMS/

Again, this one builds the executable as "python2.1", and can be installed
alongside your normal Python on the system.  Want to check out a great new
feature?  Type "python2.1 /usr/bin/pydoc string".

Download the SRPM from above, and most users can install a binary built
against exactly the set of packages on their system by doing:

   rpm --rebuild python-2.1b2-1tummy.src.rpm
   rpm -i /usr/src/redhat/RPMS/i386/python*2.1b2-1tummy.i386.rpm

Note that this release enables "--with-pymalloc".  If you experience
problems with modules you use, please report the module and how it can be
reproduced so that these issues can be taken care of.

Enjoy,
Sean
-- 
 Total strangers need love, too; and I'm stranger than most.
Sean Reifschneider, Inimitably Superfluous <jafo at tummy.com>
tummy.com - Linux Consulting since 1995. Qmail, KRUD, Firewalls, Python



From moshez at zadka.site.co.il  Sat Mar 24 07:53:03 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 08:53:03 +0200
Subject: [Python-Dev] test_minidom crash
Message-ID: <E14ghv5-0003fu-00@darjeeling>

The bug is in Lib/xml/__init__.py

__version__ = "1.9".split()[1]

I don't know what it was supposed to be, but .split() without an
argument splits on whitespace. best guess is "1.9".split('.') ??
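
For the record, the mechanics of the failure (ordinary string
behaviour, shown only for clarity): the expanded CVS keyword has three
whitespace-separated fields, but once expansion is stripped away there
is only one, so indexing [1] blows up at import time.

```python
# With the CVS keyword expanded, field [1] is the version number.
expanded = "$Revision: 1.9 $"
print(expanded.split())     # ['$Revision:', '1.9', '$']
print(expanded.split()[1])  # 1.9

# After "cvs export -kv" only the bare number is left, so there is
# no field [1] and the module dies with IndexError on import.
collapsed = "1.9"
print(collapsed.split())    # ['1.9']
try:
    collapsed.split()[1]
except IndexError:
    print("IndexError, as in the tarball")
```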

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Sat Mar 24 08:30:47 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 09:30:47 +0200
Subject: [Python-Dev] Py2.1b2/bsddb build problems
Message-ID: <E14giVb-00051a-00@darjeeling>

setup.py needs the following lines:

        if self.compiler.find_library_file(lib_dirs, 'db1'):
            dblib = ['db1']

(right after 

        if self.compiler.find_library_file(lib_dirs, 'db'):
            dblib = ['db'])

to build bsddb correctly on my system (otherwise it gets installed
but cannot be imported).

I'm using Debian sid.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From tim.one at home.com  Sat Mar 24 08:52:28 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 24 Mar 2001 02:52:28 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <E14ghv5-0003fu-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEAGJIAA.tim.one@home.com>

[Moshe Zadka]
> The bug is in Lib/xml/__init__.py
>
> __version__ = "1.9".split()[1]

Believe me, we would not have shipped 2.1b2 if it failed any of the std tests
(and I ran the whole suite 8 ways:  with and without nuking all .pyc/.pyo
files first, with and without -O, and under release and debug builds).

> I don't know what it was supposed to be, but .split() without an
> argument splits on whitespace. best guess is "1.9".split('.') ??

On my box that line is:

__version__ = "$Revision: 1.9 $".split()[1]

So is this some CVS retrieval screwup?




From moshez at zadka.site.co.il  Sat Mar 24 09:01:44 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 10:01:44 +0200
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEAGJIAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCOEAGJIAA.tim.one@home.com>
Message-ID: <E14gizY-0005B1-00@darjeeling>

On Sat, 24 Mar 2001 02:52:28 -0500, "Tim Peters" <tim.one at home.com> wrote:
 
> Believe me, we would not have shipped 2.1b2 if it failed any of the std tests
> (and I ran the whole suite 8 ways:  with and without nuking all .pyc/.pyo
> files first, with and without -O, and under release and debug builds).
> 
> > I don't know what it was supposed to be, but .split() without an
> > argument splits on whitespace. best guess is "1.9".split('.') ??
> 
> On my box that line is:
> 
> __version__ = "$Revision: 1.9 $".split()[1]
> 
> So this is this some CVS retrieval screwup?

Probably.
But nobody cares about your machine <1.9 wink>
In the Py2.1b2 you shipped, the line says
'''
__version__ = "1.9".split()[1]
'''
It's line 18.
That, or someone managed to crack one of the routers from SF to me.

should-we-start-signing-our-releases-ly y'rs, Z. 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From tim.one at home.com  Sat Mar 24 09:19:20 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 24 Mar 2001 03:19:20 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <E14gizY-0005B1-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAIJIAA.tim.one@home.com>

> Probably.
> But nobody cares about your machine <1.9 wink>
> In the Py2.1b2 you shipped, the line says
> '''
> __version__ = "1.9".split()[1]
> '''
> It's line 18.

No, in the 2.1b2 I installed on my machine, from the installer I sucked down
from SourceForge, the line is what I said it was:

__version__ = "$Revision: 1.9 $".split()[1]

So you're talking about something else, but I don't know what ...

Ah, OK!  It's that silly source tarball, Python-2.1b2.tgz.  I just sucked
that down from SF, and *that* does have the damaged line just as you say (in
Lib/xml/__init__.py).

I guess we're going to have to wait for Guido to wake up and explain how this
got hosed ... in the meantime, switch to Windows and use a real installer
<wink>.




From martin at loewis.home.cs.tu-berlin.de  Sat Mar 24 09:19:44 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 24 Mar 2001 09:19:44 +0100
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
Message-ID: <200103240819.f2O8JiF01844@mira.informatik.hu-berlin.de>

>> The mathematical definition for max() I learned in Calculus 101 was
>> "the smallest element which is > then all arguments"
>
>Then I guess American and Dutch calculus are different.
[from Israeli calculus]

The missing bit linking the two (sup and max) is

"The supremum of S is equal to its maximum if S possesses a greatest
member."
[http://www.cenius.fsnet.co.uk/refer/maths/articles/s/supremum.html]

So given a subset of a lattice, it may not have a maximum, but it will
always have a supremum. It appears that the Python max function
differs from the mathematical maximum in that respect: max will return
a value, even if that is not the "largest value"; the mathematical
maximum might give no value.

Regards,
Martin




From moshez at zadka.site.co.il  Sat Mar 24 10:13:46 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 11:13:46 +0200
Subject: [Python-Dev] setup.py is too aggressive
Message-ID: <E14gk7G-0005Wh-00@darjeeling>

It seems to me setup.py tries to build modules even when it's impossible.
E.g., I had to add the attached patch so I get no more ImportErrors
where the module shouts at me that it could not find a symbol.

*** Python-2.1b2/setup.py	Wed Mar 21 09:44:53 2001
--- Python-2.1b2-changed/setup.py	Sat Mar 24 10:49:20 2001
***************
*** 326,331 ****
--- 326,334 ----
              if (self.compiler.find_library_file(lib_dirs, 'ndbm')):
                  exts.append( Extension('dbm', ['dbmmodule.c'],
                                         libraries = ['ndbm'] ) )
+             elif (self.compiler.find_library_file(lib_dirs, 'db1')):
+                 exts.append( Extension('dbm', ['dbmmodule.c'],
+                                        libraries = ['db1'] ) )
              else:
                  exts.append( Extension('dbm', ['dbmmodule.c']) )
  
***************
*** 348,353 ****
--- 351,358 ----
          dblib = []
          if self.compiler.find_library_file(lib_dirs, 'db'):
              dblib = ['db']
+         if self.compiler.find_library_file(lib_dirs, 'db1'):
+             dblib = ['db1']
          
          db185_incs = find_file('db_185.h', inc_dirs,
                                 ['/usr/include/db3', '/usr/include/db2'])

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From tim.one at home.com  Sat Mar 24 11:19:15 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 24 Mar 2001 05:19:15 -0500
Subject: [Python-Dev] RE: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <200103240819.f2O8JiF01844@mira.informatik.hu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEAMJIAA.tim.one@home.com>

[Martin v. Loewis]
> The missing bit linking the two (sup and max) is
>
> "The supremum of S is equal to its maximum if S possesses a greatest
> member."
> [http://www.cenius.fsnet.co.uk/refer/maths/articles/s/supremum.html]
>
> So given a subset of a lattice, it may not have a maximum, but it will
> always have a supremum. It appears that the Python max function
> differs from the mathematical maximum in that respect: max will return
> a value, even if that is not the "largest value"; the mathematical
> maximum might give no value.

Note that the definition of supremum given on that page can't be satisfied in
general for lattices.  For example "x divides y" induces a lattice, where gcd
is the glb and lcm (least common multiple) the lub.  The set {6, 15} then has
lub 30, but 30 is not a supremum under the 2nd clause of that page, because 10
divides 30 but divides neither of {6, 15} (so there's an element "less than"
(== that divides) 30 which no element in the set is "larger than").

So that defn. is suitable for real analysis, but the more general defn. of
sup(S) is simply that X = sup(S) iff X is an upper bound for S (same as the
1st clause on the referenced page), and that every upper bound Y of S is >=
X.  That works for lattices too.

Since Python's max works on sequences, and never terminates given an infinite
sequence, it only makes *sense* to ask what max(S) returns for finite
sequences S.  Under a total ordering, every finite set S has a maximal
element (an element X of S such that for all Y in S Y <= X), and Python's
max(S) does return one.  If there's only a partial ordering, Python's max()
is unpredictable (may or may not blow up, depending on the order the elements
are listed; e.g., [a, b, c] where a<b and c<b but a and c aren't comparable:
in that order, max returns b, but if given in order [a, c, b] max blows up).
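
Tim's [a, b, c] example is easy to reproduce with a small hand-rolled
type (a hypothetical divisibility order, matching the gcd/lcm lattice
above): max() compares elements with >, so whether it succeeds depends
entirely on the order in which the elements are listed.

```python
class Div:
    """Ordered by divisibility: x > y iff y strictly divides x."""
    def __init__(self, n):
        self.n = n
    def __gt__(self, other):
        if self.n % other.n == 0 and self.n != other.n:
            return True    # other strictly divides self
        if other.n % self.n == 0:
            return False   # self divides other, or they are equal
        raise TypeError("%d and %d are incomparable" % (self.n, other.n))

# 6 < 30 and 15 < 30, but 6 and 15 are incomparable.
print(max([Div(6), Div(30), Div(15)]).n)  # 30; every comparison made is decidable
try:
    max([Div(6), Div(15), Div(30)])       # first comparison is 15 vs 6...
except TypeError as exc:
    print(exc)                            # ...which is incomparable
```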

Since this is all obvious to the most casual observer <0.9 wink>, it remains
unclear what the brouhaha is about.




From loewis at informatik.hu-berlin.de  Sat Mar 24 13:02:53 2001
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 24 Mar 2001 13:02:53 +0100 (MET)
Subject: [Python-Dev] setup.py is too aggressive
Message-ID: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>

> It seems to me setup.py tries to build libraries even when it's
> impossible E.g., I had to add the patch attached so I will get no
> more ImportErrors where the module shouts at me that it could not
> find a symbol.

The more general problem here is that building of a module may fail:
Even if a library is detected correctly, it might be that additional
libraries are needed. In some cases, it helps to put the correct
module line into Modules/Setup (which would have helped in your case);
then setup.py will not attempt to build the module.

However, there may be cases where a module cannot be built at all:
either some libraries are missing, or the module won't work on the
system for some other reason (e.g. since the system library it relies
on has some bug).

There should be a mechanism to tell setup.py not to build a module at
all. Since it is looking into Modules/Setup anyway, perhaps a

*excluded*
dbm

syntax in Modules/Setup would be appropriate? Of course, makesetup
needs to be taught such a syntax. Alternatively, an additional
configuration file or command line options might work.
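
If such a section existed, reading it out of Modules/Setup would be
simple; here is a sketch assuming exactly the "*excluded*" marker
proposed above (which makesetup does not actually understand today).

```python
def excluded_modules(lines):
    """Collect names listed under a hypothetical '*excluded*' marker."""
    excluded = set()
    in_excluded = False
    for line in lines:
        line = line.strip()
        if line.startswith('*') and line.endswith('*'):
            # Section markers, in the style of *shared* and *static*.
            in_excluded = (line == '*excluded*')
        elif in_excluded and line and not line.startswith('#'):
            excluded.add(line.split()[0])
    return excluded

setup_lines = ["*shared*", "spam spammodule.c", "*excluded*", "dbm", "gdbm"]
print(sorted(excluded_modules(setup_lines)))  # ['dbm', 'gdbm']
```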

In any case, distributors are certainly advised to run the testsuite
and potentially remove or fix modules for which the tests fail.

Regards,
Martin



From moshez at zadka.site.co.il  Sat Mar 24 13:09:04 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 14:09:04 +0200
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>
References: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>
Message-ID: <E14gmqu-0006Ex-00@darjeeling>

On Sat, 24 Mar 2001, Martin von Loewis <loewis at informatik.hu-berlin.de> wrote:

> In any case, distributors are certainly advised to run the testsuite
> and potentially remove or fix modules for which the tests fail.

These, however, aren't flagged as failures -- they're flagged as
ImportErrors, which are ignored during tests.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From loewis at informatik.hu-berlin.de  Sat Mar 24 13:23:47 2001
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 24 Mar 2001 13:23:47 +0100 (MET)
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <E14gmqu-0006Ex-00@darjeeling> (message from Moshe Zadka on Sat,
	24 Mar 2001 14:09:04 +0200)
References: <200103241202.NAA19000@pandora.informatik.hu-berlin.de> <E14gmqu-0006Ex-00@darjeeling>
Message-ID: <200103241223.NAA19062@pandora.informatik.hu-berlin.de>

> > In any case, distributors are certainly advised to run the testsuite
> > and potentially remove or fix modules for which the tests fail.
> 
> These, however, aren't flagged as failures -- they're flagged as
> ImportErrors which are ignored during tests

I see. Is it safe to say, for all modules in the core, that importing
them has no "dangerous" side effect? In that case, setup.py could
attempt to import them after they've been built, and delete the ones
that fail to import. Of course, that would also delete modules where
setting LD_LIBRARY_PATH might cure the problem...
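
A sketch of that import-and-check idea, written with today's importlib
(which postdates this thread) rather than whatever a 2.1-era setup.py
would actually have used:

```python
import importlib

def importable(name):
    """True if the freshly built module can actually be imported."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        # Covers both a missing module and a shared object that loads
        # but fails to resolve a symbol.
        return False

print(importable('sys'))                   # True
print(importable('no_such_module_xyzzy'))  # False
```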

Regards,
Martin



From moshez at zadka.site.co.il  Sat Mar 24 13:24:48 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 14:24:48 +0200
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <200103241223.NAA19062@pandora.informatik.hu-berlin.de>
References: <200103241223.NAA19062@pandora.informatik.hu-berlin.de>, <200103241202.NAA19000@pandora.informatik.hu-berlin.de> <E14gmqu-0006Ex-00@darjeeling>
Message-ID: <E14gn68-0006Jk-00@darjeeling>

On Sat, 24 Mar 2001, Martin von Loewis <loewis at informatik.hu-berlin.de> wrote:

> I see. Is it safe to say, for all modules in the core, that importing
> them has no "dangerous" side effect? In that case, setup.py could
> attempt to import them after they've been build, and delete the ones
> that fail to import. Of course, that would also delete modules where
> setting LD_LIBRARY_PATH might cure the problem...

So people who build will have to set LD_LIBRARY_PATH too. I don't see a problem
with that...
(particularly since this will mean that, if the tests pass, only modules
which were tested will be installed, theoretically...)
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Sat Mar 24 14:10:21 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 08:10:21 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 08:53:03 +0200."
             <E14ghv5-0003fu-00@darjeeling> 
References: <E14ghv5-0003fu-00@darjeeling> 
Message-ID: <200103241310.IAA21370@cj20424-a.reston1.va.home.com>

> The bug is in Lib/xml/__init__.py
> 
> __version__ = "1.9".split()[1]
> 
> I don't know what it was supposed to be, but .split() without an
> argument splits on whitespace. best guess is "1.9".split('.') ??

This must be because I used "cvs export -kv" to create the tarball
this time.  This may warrant a release update :-(

--Guido van Rossum (home page: http://www.python.org/~guido/)



From ping at lfw.org  Sat Mar 24 14:33:05 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Sat, 24 Mar 2001 05:33:05 -0800 (PST)
Subject: [Python-Dev] Re: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <200103240819.f2O8JiF01844@mira.informatik.hu-berlin.de>
Message-ID: <Pine.LNX.4.10.10103240532080.4368-100000@skuld.kingmanhall.org>

On Sat, 24 Mar 2001, Martin v. Loewis wrote:
> So given a subset of a lattice, it may not have a maximum, but it will
> always have a supremum. It appears that the Python max function
> differs from the mathematical maximum in that respect: max will return
> a value, even if that is not the "largest value"; the mathematical
> maximum might give no value.

Ah, but in Python most collections are usually finite. :)


-- ?!ng




From guido at digicool.com  Sat Mar 24 14:33:59 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 08:33:59 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 08:53:03 +0200."
             <E14ghv5-0003fu-00@darjeeling> 
References: <E14ghv5-0003fu-00@darjeeling> 
Message-ID: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>

OK, here's what I've done.  I've done a new cvs export of the r21b2
tag, this time *without* specifying -kv.  I've tarred it up and
uploaded it to SF and python.org.  The new tarball is called
Python-2.1b2a.tgz to distinguish it from the broken one.  I've removed
the old, broken tarball, and added a note to the python.org/2.1/ page
about the new tarball.

Background:

"cvs export -kv" changes all CVS version insertions from "$Release:
1.9$" to "1.9".  (It affects other CVS inserts too.)  This is so that
the versions don't get changed when someone else incorporates it into
their own CVS tree, which used to be a common usage pattern.

The question is, should we bother to make the code robust under
releases with -kv or not?  I used to write code that dealt with the
fact that __version__ could be either "$Revision: 1.9 $" or "1.9", but
clearly that bit of arcane knowledge got lost.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From gmcm at hypernet.com  Sat Mar 24 14:46:33 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Sat, 24 Mar 2001 08:46:33 -0500
Subject: [Python-Dev] Making types behave like classes
In-Reply-To: <3ABBF553.274D535@ActiveState.com>
Message-ID: <3ABC5EE9.2943.14C818C7@localhost>

[Paul Prescod]
> These are some half-baked ideas about getting classes and types
> to look more similar. I would like to know whether they are
> workable or not and so I present them to the people best equipped
> to tell me.

[expand Py_FindMethod's actions]

>  * __class__ would return for the type object
>  * __add__,__len__, __call__, ... would return a method wrapper
>  around
> the appropriate slot, 	
>  * __init__ might map to a no-op
> 
> I think that Py_FindMethod could even implement inheritance
> between types if we wanted.
> 
> We already do this magic for __methods__ and __doc__. Why not for
> all of the magic methods?

Those are introspective; typically read in the interactive 
interpreter. I can't do anything with them except read them.

If you wrap, eg, __len__, what can I do with it except call it? I 
can already do that with len().

> Benefits:
> 
>  * objects based on extension types would "look more like"
>  classes to
> Python programmers so there is less confusion about how they are
> different

I think it would probably enhance confusion to have the "look 
more like" without "being more like".
 
>  * users could stop using the type() function to get concrete
>  types and
> instead use __class__. After a version or two, type() could be
> formally deprecated in favor of isinstance and __class__.

__class__ is a callable object. It has a __name__. From the 
Python side, a type isn't much more than an address. Until 
Python's object model is redone, there are certain objects for 
which type(o) and o.__class__ return quite different things.
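
The divergence is easy to demonstrate, because __class__ is an
ordinary attribute lookup while type() reads the object's real type
directly (a contrived modern illustration, not anything from the 2.1
codebase):

```python
class Impostor:
    # __class__ can be overridden; type() cannot be fooled.
    @property
    def __class__(self):
        return int

x = Impostor()
print(type(x) is Impostor)  # True: type() reports the real type
print(x.__class__ is int)   # True: the attribute lies
print(isinstance(x, int))   # True: isinstance trusts __class__
```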
 
>  * we will have started some momentum towards type/class
>  unification
> which we could continue on into __setattr__ and subclassing.

The major lesson I draw from ExtensionClass and friends is 
that achieving this behavior in today's Python is horrendously 
complex and fragile. Until we can do it right, I'd rather keep it 
simple (and keep the warts on the surface).

- Gordon



From moshez at zadka.site.co.il  Sat Mar 24 14:45:32 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 15:45:32 +0200
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>
References: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>, <E14ghv5-0003fu-00@darjeeling>
Message-ID: <E14goMG-0006bL-00@darjeeling>

On Sat, 24 Mar 2001 08:33:59 -0500, Guido van Rossum <guido at digicool.com> wrote:

> OK, here's what I've done.  I've done a new cvs export of the r21b2
> tag, this time *without* specifying -kv.

This was clearly the solution to *this* problem ;-)
"No code changes in CVS between the same release" sounds like a good
rule.

> The question is, should we bother to make the code robust under
> releases with -kv or not?

Yes.
People *will* be incorporating Python into their own CVS trees. FreeBSD
does it with ports, and Debian are thinking of moving in this direction,
and some Debian maintainers already do that with upstream packages --
Python might be handled like that too.

The only problem I see is that we need to run the test suite with a -kv'less
export. Fine, this should be part of the release procedure.
I just went through the core grepping for '$Revision' and it seems this
is the only place this happens -- all the other places either put the default
version (RCS cruft and all), or are smart about handling it.

Since "smart" means just
__version__ = [part for part in "$Revision$".split() if '$' not in part][0]
We can just mandate that, and be safe.

However, whatever we do the Windows build and the UNIX build must be the
same.
I think it should be possible to build the Windows version from the .tgz,
and that is what (IMHO) should happen, instead of Tim and Guido exporting
from the CVS independently. This would stop problems like the one
Tim and I had this (my time) morning.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Sat Mar 24 16:34:13 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 10:34:13 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 15:45:32 +0200."
             <E14goMG-0006bL-00@darjeeling> 
References: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>, <E14ghv5-0003fu-00@darjeeling>  
            <E14goMG-0006bL-00@darjeeling> 
Message-ID: <200103241534.KAA27065@cj20424-a.reston1.va.home.com>

> People *will* be incorporating Python into their own CVS trees. FreeBSD
> does it with ports, and Debian are thinking of moving in this direction,
> and some Debian maintainers already do that with upstream packages --
> Python might be handled like that too.

I haven't seen *any* complaints about this, so is it possible that
they don't mind having the $Revision: ... $ strings in there?

> The only problem I see if that we need to run the test-suite with a
> -kv'less export.  Fine, this should be part of the release
> procedure.  I just went through the core grepping for '$Revision'
> and it seems this is the only place this happens -- all the other
> places either put the default version (RCS cruft and all), or are
> smart about handling it.

Hm.  This means that the -kv version gets *much* less testing than the
regular checkout version.  I've done this before in the past with
other projects and I remember that the bugs produced by this kind of
error are very subtle and not always caught by the test suite.

So I'm skeptical.

> Since "smart" means just
> __version__ = [part for part in "$Revision$".split() if '$' not in part][0]
> We can just mandate that, and be safe.

This is less typing, and no more obscure, and seems to work just as
well given that the only two inputs are "$Revision: 1.9 $" or "1.9":

    __version__ = "$Revision: 1.9 $".split()[-2:][0]

> However, whatever we do the Windows build and the UNIX build must be the
> same.

That's hard right there -- we currently build the Windows compiler
right out of the CVS tree.

> I think it should be possible to build the Windows version from the .tgz
> and that is what (IMHO) should happen, instead of Tim and Guido exporting
> from the CVS independently. This would stop problems like the one
> Tim and I had this (my time) morning.

Who are you to tell us how to work?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From moshez at zadka.site.co.il  Sat Mar 24 16:41:10 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 17:41:10 +0200
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <200103241534.KAA27065@cj20424-a.reston1.va.home.com>
References: <200103241534.KAA27065@cj20424-a.reston1.va.home.com>, <200103241333.IAA25157@cj20424-a.reston1.va.home.com>, <E14ghv5-0003fu-00@darjeeling>  
            <E14goMG-0006bL-00@darjeeling>
Message-ID: <E14gqAA-0006uP-00@darjeeling>

On Sat, 24 Mar 2001 10:34:13 -0500, Guido van Rossum <guido at digicool.com> wrote:

> I haven't seen *any* complaints about this, so is it possible that
> they don't mind having the $Revision: ... $ strings in there?

I don't know.
Like I said, my feelings about that are not very strong...

> > I think it should be possible to build the Windows version from the .tgz
> > and that is what (IMHO) should happen, instead of Tim and Guido exporting
> > from the CVS independently. This would stop problems like the one
> > Tim and I had this (my time) morning.
> 
> Who are you to tell us how to work?

I said "I think" and "IMHO", so I'm covered. I was only giving suggestions.
You're free to ignore them if you think my opinion is without merit.
I happen to think otherwise <8am wink>, but you're the BDFL and I'm not.
Are you saying it's not important to you that the .py's in Windows and
UNIX are the same?
I think it should be a priority, given that when people complain about
OS-independent problems, they often neglect to mention the OS.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From martin at loewis.home.cs.tu-berlin.de  Sat Mar 24 17:49:10 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 24 Mar 2001 17:49:10 +0100
Subject: [Python-Dev] Re: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <Pine.LNX.4.10.10103240532080.4368-100000@skuld.kingmanhall.org>
	(message from Ka-Ping Yee on Sat, 24 Mar 2001 05:33:05 -0800 (PST))
References: <Pine.LNX.4.10.10103240532080.4368-100000@skuld.kingmanhall.org>
Message-ID: <200103241649.f2OGnAa04582@mira.informatik.hu-berlin.de>

> On Sat, 24 Mar 2001, Martin v. Loewis wrote:
> > So given a subset of a lattice, it may not have a maximum, but it will
> > always have a supremum. It appears that the Python max function
> > differs from the mathematical maximum in that respect: max will return
> > a value, even if that is not the "largest value"; the mathematical
> > maximum might give no value.
> 
> Ah, but in Python most collections are usually finite. :)

Even a finite collection may not have a maximum, which Moshe's
original example illustrates:

s1 = set(1,4,5)
s2 = set(4,5,6)

max([s1,s2]) == ???

With respect to the subset relation, the collection [s1,s2] has no
maximum; its supremum is set(1,4,5,6). A maximum is only guaranteed to
exist for a finite collection if the order is total.
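
A runnable sketch with modern built-in sets (an assumption; the example
above predates them):

```python
# Under the subset partial order, max() merely compares the elements
# pairwise and returns one of them; it is not the supremum.
s1 = frozenset({1, 4, 5})
s2 = frozenset({4, 5, 6})

m = max([s1, s2])                    # falls back on <, i.e. proper subset
assert m in (s1, s2)                 # some element is returned...
assert not (s1 <= m and s2 <= m)     # ...but it is not an upper bound

assert s1 | s2 == frozenset({1, 4, 5, 6})   # the supremum is the union
```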

Regards,
Martin



From barry at digicool.com  Sat Mar 24 18:19:20 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sat, 24 Mar 2001 12:19:20 -0500
Subject: [Python-Dev] test_minidom crash
References: <E14ghv5-0003fu-00@darjeeling>
	<200103241310.IAA21370@cj20424-a.reston1.va.home.com>
Message-ID: <15036.55064.497185.806163@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    >> The bug is in Lib/xml/__init__.py __version__ =
    >> "1.9".split()[1] I don't know what it was supposed to be, but
    >> .split() without an argument splits on whitespace. best guess
    >> is "1.9".split('.') ??

    GvR> This must be because I used "cvs export -kv" to create the
    GvR> tarball this time.  This may warrant a release update :-(

Using "cvs export -kv" is a Good Idea for a release!  That's because
if others import the release into their own CVS, or pull the file into
an unrelated CVS repository, your revision numbers are preserved.

I haven't followed this thread very carefully, but isn't there a
better way to fix the problem than to stop using -kv (I'm not sure
that's what Guido has in mind)?

-Barry



From martin at loewis.home.cs.tu-berlin.de  Sat Mar 24 18:30:46 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 24 Mar 2001 18:30:46 +0100
Subject: [Python-Dev] test_minidom crash
Message-ID: <200103241730.f2OHUkJ04865@mira.informatik.hu-berlin.de>

[Moshe]
> I just went through the core grepping for '$Revision' and it seems
> this is the only place this happens -- all the other places either
> put the default version (RCS cruft and all), or are smart about
> handling it.

You have not searched carefully enough. pyexpat.c has

    char *rev = "$Revision: 2.44 $";
...
    PyModule_AddObject(m, "__version__",
                       PyString_FromStringAndSize(rev+11, strlen(rev+11)-2));

> I haven't seen *any* complaints about this, so is it possible that
> they don't mind having the $Revision: ... $ strings in there?

The problem is that they don't yet know what problems they will run
into. E.g. if they import pyexpat.c into their tree, they get
1.1.1.1; even after later imports, they still get 1.x. Now, PyXML
currently decides that the Python pyexpat is not good enough if it is
older than 2.39. In turn, they might get different code being used
when installing out of their CVS as compared to installing from the
source distributions.

That all shouldn't cause problems, but it would probably help if
source releases continue to use -kv; then likely every end-user will
get the same sources. I'd volunteer to review the core sources (and
produce patches) if that is desired.

Regards,
Martin



From barry at digicool.com  Sat Mar 24 18:33:47 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sat, 24 Mar 2001 12:33:47 -0500
Subject: [Python-Dev] test_minidom crash
References: <E14ghv5-0003fu-00@darjeeling>
	<200103241333.IAA25157@cj20424-a.reston1.va.home.com>
Message-ID: <15036.55931.367420.983599@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    GvR> The question is, should we bother to make the code robust
    GvR> under releases with -kv or not?

Yes.
    
    GvR> I used to write code that dealt with the fact that
    GvR> __version__ could be either "$Release: 1.9$" or "1.9", but
    GvR> clearly that bit of arcane knowledge got lost.

Time to re-educate then!

On the one hand, I personally try to avoid assigning __version__ from
a CVS revision number because I'm usually interested in a more
confederated release.  I.e. mimelib 0.2 as opposed to
mimelib/mimelib/__init__.py revision 1.4.  If you want the CVS
revision of the file to be visible in the file, use a different global
variable, or stick it in a comment and don't worry about sucking out
just the numbers.

OTOH, I understand this is a convenient way to not have to munge
version numbers so lots of people do it (I guess).

Oh, I see there are other followups to this thread, so I'll shut up
now.  I think Guido's split() idiom is the Right Thing To Do; it works
with branch CVS numbers too:

>>> "$Revision: 1.9.4.2 $".split()[-2:][0]
'1.9.4.2'
>>> "1.9.4.2".split()[-2:][0]
'1.9.4.2'

-Barry



From guido at digicool.com  Sat Mar 24 19:13:45 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 13:13:45 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 12:19:20 EST."
             <15036.55064.497185.806163@anthem.wooz.org> 
References: <E14ghv5-0003fu-00@darjeeling> <200103241310.IAA21370@cj20424-a.reston1.va.home.com>  
            <15036.55064.497185.806163@anthem.wooz.org> 
Message-ID: <200103241813.NAA27426@cj20424-a.reston1.va.home.com>

> Using "cvs export -kv" is a Good Idea for a release!  That's because
> if others import the release into their own CVS, or pull the file into
> an unrelated CVS repository, your revision numbers are preserved.

I know, but I doubt that this is used much any more.  I haven't had
any complaints about this, and I know that we didn't use -kv for
previous releases (I checked 1.5.2, 1.6 and 2.0).

> I haven't followed this thread very carefully, but isn't there a
> better way to fix the problem rather than stop using -kv (I'm not sure
> that's what Guido has in mind)?

Well, if we only use -kv to create the final tarball and installer, and
everybody else uses just the CVS version, the problem is that we don't
have enough testing time in.

Given that most code is written to deal with "$Revision: 1.9 $", why
bother breaking it?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Sat Mar 24 19:14:51 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 13:14:51 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 18:30:46 +0100."
             <200103241730.f2OHUkJ04865@mira.informatik.hu-berlin.de> 
References: <200103241730.f2OHUkJ04865@mira.informatik.hu-berlin.de> 
Message-ID: <200103241814.NAA27441@cj20424-a.reston1.va.home.com>

> That all shouldn't cause problems, but it would probably help if
> source releases continue to use -kv; then likely every end-user will
> get the same sources. I'd volunteer to review the core sources (and
> produce patches) if that is desired.

I'm not sure if it's a matter of "continue to use" -- as I said, 1.5.2
and later releases haven't used -kv.

Nevertheless, patches to fix this will be most welcome.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Sat Mar 24 21:49:46 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 24 Mar 2001 15:49:46 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <E14goMG-0006bL-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEBMJIAA.tim.one@home.com>

[Moshe]
> ...
> I just went through the core grepping for '$Revision' and it seems
> this is the only place this happens -- all the other places either put
> the default version (RCS cruft and all), or are smart about handling it.

Hmm.  Unless it's in a *comment*, I expect most uses are dubious.  Clear
example, from the new Lib/unittest.py:

__version__ = "$Revision: 1.2 $"[11:-2]

Presumably that's yielding an empty string under the new tarball release.
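
That presumption is easy to check (a quick sketch, not from the original
message):

```python
# With an expanded keyword the slice extracts the number, but after
# "cvs export -kv" the string collapses to just "1.2" and the same
# slice yields an empty string.
assert "$Revision: 1.2 $"[11:-2] == "1.2"
assert "1.2"[11:-2] == ""
```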

One of a dozen fuzzy examples, from pickle.py:

__version__ = "$Revision: 1.46 $"       # Code version

The module makes no other use of this, and since it's not in a comment I have
to presume that the author *intended* clients to access pickle.__version__
directly.  But, if so, they've been getting the $Revision business for years,
so changing the released format now could break users' code.

> ...
> However, whatever we do the Windows build and the UNIX build must be
> the same.

*Sounds* good <wink>.

> I think it should be possible to build the Windows version from the
> .tgz and that is what (IMHO) should happen, instead of Tim and Guido
> exporting from the CVS independently. This would stop problems like the
> one Tim and I had this (my time) morning.

Ya, sounds good too.  A few things against it:  The serialization would add
hours to the release process, in part because I get a lot of testing done
now, on the Python I install *from* the Windows installer I build, while the
other guys are finishing the .tgz business (note that Guido doesn't similarly
run tests on a Python built from the tarball, else he would have caught this
problem before you!).

Also in part because the Windows installer is not a simple packaging of the
source tree:  the Windows version also ships with pre-compiled components for
Tcl/Tk, zlib, bsddb and pyexpat.  The source for that stuff doesn't come in
the tarball; it has to be sprinkled "by hand" into the source tree.

The last gets back to Guido's point, which is also a good one:  if the
Windows release gets built from a tree I've used for the very first time a
couple hours before the release, the odds are higher that a process
screwup gets overlooked.

To date, there have been no "process bugs" in the Windows build process, and
I'd be loath to give that up.  Building from the tree I use every day is ...
reassuring.

At heart, I don't much like the idea of using source revision numbers as code
version numbers anyway -- "New and Improved!  Version 1.73 stripped a
trailing space from line 239!" <wink>.

more-info-than-anyone-needs-to-know-ly y'rs  - tim




From paul at pfdubois.com  Sat Mar 24 23:14:03 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Sat, 24 Mar 2001 14:14:03 -0800
Subject: [Python-Dev] distutils change breaks code, Pyfort
Message-ID: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>

The requirement of a version argument to the distutils command breaks Pyfort
and many of my existing packages. These packages are not intended for use
with the distribution commands and a package version number would be
meaningless.

I will make a new Pyfort that supplies a version number to the call it makes
to setup. However, I think this change to distutils is a poor idea. If the
version number would be required for the distribution commands, let *them*
complain, perhaps by setting a default value of time.asctime(time.gmtime())
or something that the distribution commands could object to.

I apologize if I missed an earlier discussion of this change that seems to
be in 2.1b2 but not 2.1b1, as I am new to this list.

Paul





From jafo at tummy.com  Sun Mar 25 00:17:35 2001
From: jafo at tummy.com (Sean Reifschneider)
Date: Sat, 24 Mar 2001 16:17:35 -0700
Subject: [Python-Dev] RFC: PEP243: Module Repository Upload Mechanism
Message-ID: <20010324161735.A19818@tummy.com>

Included below is the version of PEP243 after its initial round of review.
I welcome any feedback.

Thanks,
Sean

============================================================================
PEP: 243
Title: Module Repository Upload Mechanism
Version: $Revision$
Author: jafo-pep at tummy.com (Sean Reifschneider)
Status: Draft
Type: Standards Track
Created: 18-Mar-2001
Python-Version: 2.1
Post-History: 
Discussions-To: distutils-sig at python.org


Abstract

    For a module repository system (such as Perl's CPAN) to be
    successful, it must be as easy as possible for module authors to
    submit their work.  An obvious place for this submission to happen is
    in the Distutils tools after the distribution archive has been
    successfully created.  For example, after a module author has
    tested their software (verifying the results of "setup.py sdist"),
    they might type "setup.py sdist --submit".  This would flag
    Distutils to submit the source distribution to the archive server
    for inclusion and distribution to the mirrors.

    This PEP only deals with the mechanism for submitting the software
    distributions to the archive, and does not deal with the actual
    archive/catalog server.


Upload Process

    The upload will include the Distutils "PKG-INFO" meta-data
    information (as specified in PEP-241 [1]), the actual software
    distribution, and other optional information.  This information
    will be uploaded as a multi-part form encoded the same as a
    regular HTML file upload request.  This form is posted using
    ENCTYPE="multipart/form-data" encoding [RFC1867].

    The upload will be made to the host "modules.python.org" on port
    80/tcp (POST http://modules.python.org:80/swalowpost.cgi).  The form
    will consist of the following fields:

        distribution -- The file containing the module software (for
        example, a .tar.gz or .zip file).

        distmd5sum -- The MD5 hash of the uploaded distribution,
        encoded in ASCII representing the hexadecimal representation
        of the digest ("for byte in digest: s = s + ('%02x' %
        ord(byte))").

        pkginfo (optional) -- The file containing the distribution
        meta-data (as specified in PEP-241 [1]).  Note that if this is not
        included, the distribution file is expected to be in .tar format
        (gzip and bzip2 compression are allowed) or .zip format, with a
        "PKG-INFO" file in the top-level directory it extracts
        ("package-1.00/PKG-INFO").

        infomd5sum (required if pkginfo field is present) -- The MD5 hash
        of the uploaded meta-data, encoded in ASCII representing the
        hexadecimal representation of the digest ("for byte in digest:
        s = s + ('%02x' % ord(byte))").

        platform (optional) -- A string representing the target
        platform for this distribution.  This is only for binary
        distributions.  It is encoded as
        "<os_name>-<os_version>-<platform architecture>-<python
        version>".

        signature (optional) -- An OpenPGP-compatible signature [RFC2440]
        of the uploaded distribution as signed by the author.  This may be
        used by the cataloging system to automate acceptance of uploads.

        protocol_version -- A string indicating the protocol version that
        the client supports.  This document describes protocol version "1".
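
    A minimal sketch of the digest encoding described for "distmd5sum"
    and "infomd5sum", written with the modern hashlib module (an
    assumption; this document itself used the older md5 module's
    byte-string digests):

```python
import hashlib

data = b"example distribution contents"   # stands in for the .tar.gz bytes
digest = hashlib.md5(data).digest()

# The "for byte in digest: s = s + ('%02x' % ord(byte))" loop, updated
# for Python 3, where iterating over bytes yields ints directly:
distmd5sum = ''.join('%02x' % byte for byte in digest)

assert distmd5sum == hashlib.md5(data).hexdigest()
```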


Return Data

    The status of the upload will be reported using non-standard HTTP
    ("X-*") headers.  The "X-Swalow-Status" header may have the following
    values:

        SUCCESS -- Indicates that the upload has succeeded.

        FAILURE -- The upload is, for some reason, unable to be
        processed.

        TRYAGAIN -- The server is unable to accept the upload at this
        time, but the client should try again at a later time.
        Potential causes of this are resource shortages on the server,
        administrative down-time, etc...

    Optionally, there may be an "X-Swalow-Reason" header which includes a
    human-readable string which provides more detailed information about
    the "X-Swalow-Status".

    If there is no "X-Swalow-Status" header, or it does not contain one of
    the three strings above, it should be treated as a temporary failure.

    Example:

        >>> f = urllib.urlopen('http://modules.python.org:80/swalowpost.cgi')
        >>> s = f.headers['x-swalow-status']
        >>> s = s + ': ' + f.headers.get('x-swalow-reason', '<None>')
        >>> print s
        FAILURE: Required field "distribution" missing.


Sample Form

    The upload client must submit the page in the same form as
    Netscape Navigator version 4.76 for Linux produces when presented
    with the following form:

        <H1>Upload file</H1>
        <FORM NAME="fileupload" METHOD="POST" ACTION="swalowpost.cgi"
              ENCTYPE="multipart/form-data">
        <INPUT TYPE="file" NAME="distribution"><BR>
        <INPUT TYPE="text" NAME="distmd5sum"><BR>
        <INPUT TYPE="file" NAME="pkginfo"><BR>
        <INPUT TYPE="text" NAME="infomd5sum"><BR>
        <INPUT TYPE="text" NAME="platform"><BR>
        <INPUT TYPE="text" NAME="signature"><BR>
        <INPUT TYPE="hidden" NAME="protocol_version" VALUE="1"><BR>
        <INPUT TYPE="SUBMIT" VALUE="Upload">
        </FORM>


Platforms

    The following are valid os names:

        aix beos debian dos freebsd hpux mac macos mandrake netbsd
        openbsd qnx redhat solaris suse windows yellowdog

    The above includes a number of different Linux distributions.
    Because of versioning issues these must be split out; when it makes
    sense for one system to use distributions made on other similar
    systems, the download client is expected to make the distinction.

    Version is the official version string specified by the vendor for
    the particular release.  For example, "2000" and "nt" (Windows),
    "9.04" (HP-UX), "7.0" (RedHat, Mandrake).

    The following are valid architectures:

        alpha hppa ix86 powerpc sparc ultrasparc


Status

    I currently have a proof-of-concept client and server implemented.
    I plan to have the Distutils patches ready for the 2.1 release.
    Combined with Andrew's PEP-241 [1] for specifying distribution
    meta-data, I hope to have a platform which will allow us to gather
    real-world data for finalizing the catalog system for the 2.2
    release.


References

    [1] Metadata for Python Software Package, Kuchling,
        http://python.sourceforge.net/peps/pep-0241.html

    [RFC1867] Form-based File Upload in HTML
        http://www.faqs.org/rfcs/rfc1867.html

    [RFC2440] OpenPGP Message Format
        http://www.faqs.org/rfcs/rfc2440.html


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:
-- 
 A smart terminal is not a smart*ass* terminal, but rather a terminal
 you can educate.  -- Rob Pike
Sean Reifschneider, Inimitably Superfluous <jafo at tummy.com>
tummy.com - Linux Consulting since 1995. Qmail, KRUD, Firewalls, Python



From martin at loewis.home.cs.tu-berlin.de  Sun Mar 25 01:47:26 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 25 Mar 2001 01:47:26 +0100
Subject: [Python-Dev] distutils change breaks code, Pyfort
Message-ID: <200103250047.f2P0lQn00987@mira.informatik.hu-berlin.de>

> The requirement of a version argument to the distutils command
> breaks Pyfort and many of my existing packages. These packages are
> not intended for use with the distribution commands and a package
> version number would be meaningless.

So this is clearly an incompatible change. According to the
procedures in PEP 5, there should be a warning issued before aborting
setup. Later (major) releases of Python, or distutils, could change
the warning into an error.

Nevertheless, I agree with the change in principle. Distutils can and
should enforce a certain amount of policy; among this, having a
version number sounds like a reasonable requirement - even though its
primary use is for building (and uploading) distributions. Are you
saying that Pyfort does not have a version number? On SF, I can get
version 6.3...

Regards,
Martin



From paul at pfdubois.com  Sun Mar 25 03:43:52 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Sat, 24 Mar 2001 17:43:52 -0800
Subject: [Python-Dev] RE: distutils change breaks code, Pyfort
In-Reply-To: <200103250047.f2P0lQn00987@mira.informatik.hu-berlin.de>
Message-ID: <ADEOIFHFONCLEEPKCACCAEDNCHAA.paul@pfdubois.com>

Pyfort is the kind of package the change was intended for, and it does have
a version number. But I have other packages that cannot stand on their own,
that are part of a bigger suite of packages, and dist is never going to be
used. They don't have a MANIFEST, etc. The setup.py file is used instead of
a Makefile. I don't think it is logical to require a version number
that is not used in that case. We also raise the "entry fee" for learning to
use Distutils or starting a new package.

In the case of Pyfort there is NO setup.py; it just runs a command on
the fly. But I've already fixed it with version 6.3.

I think we have all focused on the public distribution problem but in fact
Distutils is just great as an internal tool for building large software
projects and that is how I use it. I agree that if I want to use sdist,
bdist etc. that I need to set the version. But then, I need to do other
things too in that case.

-----Original Message-----
From: Martin v. Loewis [mailto:martin at loewis.home.cs.tu-berlin.de]
Sent: Saturday, March 24, 2001 4:47 PM
To: paul at pfdubois.com
Cc: python-dev at python.org
Subject: distutils change breaks code, Pyfort


> The requirement of a version argument to the distutils command
> breaks Pyfort and many of my existing packages. These packages are
> not intended for use with the distribution commands and a package
> version number would be meaningless.

So this is clearly an incompatible change. According to the
procedures in PEP 5, there should be a warning issued before aborting
setup. Later (major) releases of Python, or distutils, could change
the warning into an error.

Nevertheless, I agree with the change in principle. Distutils can and
should enforce a certain amount of policy; among this, having a
version number sounds like a reasonable requirement - even though its
primary use is for building (and uploading) distributions. Are you
saying that Pyfort does not have a version number? On SF, I can get
version 6.3...

Regards,
Martin




From barry at digicool.com  Sun Mar 25 05:06:21 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sat, 24 Mar 2001 22:06:21 -0500
Subject: [Python-Dev] RE: distutils change breaks code, Pyfort
References: <200103250047.f2P0lQn00987@mira.informatik.hu-berlin.de>
	<ADEOIFHFONCLEEPKCACCAEDNCHAA.paul@pfdubois.com>
Message-ID: <15037.24749.117157.228368@anthem.wooz.org>

>>>>> "PFD" == Paul F Dubois <paul at pfdubois.com> writes:

    PFD> I think we have all focused on the public distribution
    PFD> problem but in fact Distutils is just great as an internal
    PFD> tool for building large software projects and that is how I
    PFD> use it.

I've used it this way too, and you're right, it's great for this.
Esp. for extensions, it's much nicer than fiddling with
Makefile.pre.in's etc.  So I think I agree with you about the version
numbers and other required metadata -- or at least, there should be an
escape.

-Barry



From tim.one at home.com  Sun Mar 25 07:07:20 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 25 Mar 2001 00:07:20 -0500
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010321214432.A25810@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEDBJIAA.tim.one@home.com>

[Neil Schemenauer]
> Apparently they [Icon-style generators] are good for lots of other
> things too.  Tonight I implemented passing values using resume().
>  Next, I decided to see if I had enough magic juice to tackle the
> coroutine example from Gordon's stackless tutorial.  It turns out
> that I didn't need the extra functionality.  Generators are enough.
>
> The code is not too long so I've attached it.  I figure that some
> people might need a break from 2.1 release issues.

I'm afraid we were buried alive under them at the time, and I don't want this
one to vanish in the bit bucket!

> I think the generator version is even simpler than the coroutine
> version.
>
> [Example code for the Dahl/Hoare "squasher" program elided -- see
>  the archive]

This raises a potentially interesting point:  is there *any* application of
coroutines for which simple (yield-only-to-immediate-caller) generators
wouldn't suffice, provided that they're explicitly resumable?

I suspect there isn't.  If you give me a coroutine program, and let me add a
"control loop", I can:

1. Create an Icon-style generator for each coroutine "before the loop".

2. Invoke one of the coroutines "before the loop".

3. Replace each instance of

       coroutine_transfer(some_other_coroutine, some_value)

   within the coroutines by

       yield some_other_coroutine, some_value

4. The "yield" then returns to the control loop, which picks apart
   the tuple to find the next coroutine to resume and the value to
   pass to it.

This starts to look a lot like uthreads, but built on simple generator
yield/resume.

It loses some things:

A. Coroutine A can't *call* routine B and have B do a co-transfer
   directly.  But A *can* invoke B as a generator and have B yield
   back to A, which in turn yields back to its invoker ("the control
   loop").

B. As with recursive Icon-style generators, a partial result generated
   N levels deep in the recursion has to suspend its way thru N
   levels of frames, and resume its way back down N levels of frames
   to get moving again.  Real coroutines can transmit results directly
   to the ultimate consumer.

OTOH, it may gain more than it loses:

A. Simple to implement in CPython without threads, and at least
   possible likewise even for Jython.

B. C routines "in the middle" aren't necessarily show-stoppers.  While
   they can't exploit Python's implementation of generators directly,
   they *could* participate in the yield/resume *protocol*, acting "as
   if" they were Python routines.  Just like Python routines have to
   do today, C routines would have to remember their own state and
   arrange to save/restore it appropriately across calls (but to the
   C routines, they *are* just calls and returns, and nothing trickier
   than that -- their frames truly vanish when "suspending up", so
   don't get in the way).

the-meek-shall-inherit-the-earth<wink>-ly y'rs  - tim




From nas at arctrix.com  Sun Mar 25 07:47:48 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Sat, 24 Mar 2001 21:47:48 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEDBJIAA.tim.one@home.com>; from tim.one@home.com on Sun, Mar 25, 2001 at 12:07:20AM -0500
References: <20010321214432.A25810@glacier.fnational.com> <LNBBLJKPBEHFEDALKOLCCEDBJIAA.tim.one@home.com>
Message-ID: <20010324214748.A32161@glacier.fnational.com>

On Sun, Mar 25, 2001 at 12:07:20AM -0500, Tim Peters wrote:
> If you give me a coroutine program, and let me add a "control
> loop", ...

This is exactly what I started doing when I was trying to rewrite
your Coroutine.py module to use generators.

> A. Simple to implement in CPython without threads, and at least
>    possible likewise even for Jython.

I'm not sure about Jython.  The sys._getframe(), frame.suspend(),
and frame.resume() low level interface is nice.  I think Jython
must know which frames are going to be suspended at compile time.
That makes it hard to build higher level control abstractions.  I
don't know much about Jython though so maybe there's another way.
In any case it should be possible to use threads to implement
some common higher level interfaces.

  Neil



From tim.one at home.com  Sun Mar 25 08:11:58 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 25 Mar 2001 01:11:58 -0500
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010324214748.A32161@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>

[Tim]
>> If you give me a coroutine program, and let me add a "control
>> loop", ...

[Neil Schemenauer]
> This is exactly what I started doing when I was trying to rewrite
> your Coroutine.py module to use generators.

Ya, I figured as much -- for a Canadian, you don't drool much <wink>.

>> A. Simple to implement in CPython without threads, and at least
>>    possible likewise even for Jython.

> I'm not sure about Jython.  The sys._getframe(), frame.suspend(),
> and frame.resume() low level interface is nice.  I think Jython
> must know which frames are going to be suspended at compile time.

Yes, Samuele said as much.  My belief is that generators don't become *truly*
pleasant unless "yield" ("suspend"; whatever) is made a new statement type.
Then Jython knows exactly where yields can occur.  As in CLU (but not Icon),
it would also be fine by me if routines *used* as generators also needed to
be explicitly marked as such (this is a non-issue in Icon because *every*
Icon expression "is a generator" -- there is no other kind of procedure
there).

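The design being argued for can be made concrete with a small sketch: any function body containing a "yield" statement is a generator, so the compiler (CPython or Jython) knows exactly where suspension can occur. The syntax below is the one Python eventually adopted:

```python
# A function containing ``yield`` is compiled as a generator; the
# suspension points are known statically, which is the property Tim
# is asking for on Jython's behalf.
def squares(n):
    for i in range(n):
        yield i * i

assert list(squares(4)) == [0, 1, 4, 9]
```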

> That makes it hard to build higher level control abstractions.
> I don't know much about Jython though so maybe there's another way.
> In any case it should be possible to use threads to implement
> some common higher level interfaces.

What I'm wondering is whether I care <0.4 wink>.  I agreed with you, e.g.,
that your squasher example was more pleasant to read using generators than in
its original coroutine form.  People who want to invent brand new control
structures will be happier with Scheme anyway.




From tim.one at home.com  Sun Mar 25 10:07:09 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 25 Mar 2001 03:07:09 -0500
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: <200103210423.VAA20300@localhost.localdomain>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEDGJIAA.tim.one@home.com>

[Tim]
>> The correspondent I quoted believed the latter ["simple" generators]
>> were on-target for XSLT work ... But ... I don't know whether they're
>> sufficient for what you have in mind.

[Uche Ogbuji]
> Based on a discussion with Christian at IPC9, they are.  I should
> have been more clear about that.  My main need is to be able to change
> a bit of context and invoke a different execution path, without going
> through the full overhead of a function call.  XSLT, if written
> "naturally", tends to involve huge numbers of such tweak-context-and-
> branch operations.
> ...
> Suspending only to the invoker should do the trick because it is
> typically a single XSLT instruction that governs multiple tree-
> operations with varied context.

Thank you for explaining more!  It's helpful.

> At IPC9, Guido put up a poll of likely use of stackless features,
> and it was a pretty clear arithmetic progression from those who
> wanted to use microthreads, to those who wanted co-routines, to
> those who wanted just generators.  The generator folks were
> probably 2/3 of the assembly.  Looks as if many have decided,
> and they seem to agree with you.

They can't:  I haven't taken a position <0.5 wink>.  As I said, I'm trying to
get closer to understanding the cost/benefit tradeoffs here.

I've been nagging in favor of simple generators for a decade now, and every
time I've tried they've gotten hijacked by some grander scheme with much
muddier tradeoffs.  That's been very frustrating, since I've had good uses
for simple generators darned near every day of my Python life, and "the only
thing stopping them" has been a morbid fascination with Scheme's mistakes
<wink>.  That phase appears to be over, and *now* "the only thing stopping
them" appears to be a healthy fascination with coroutines and uthreads.
That's cool, although this is definitely a "the perfect is the enemy of the
good" kind of thing.

trying-leave-a-better-world-for-the-children<wink>-ly y'rs  - tim




From paulp at ActiveState.com  Sun Mar 25 20:30:34 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 25 Mar 2001 10:30:34 -0800
Subject: [Python-Dev] Making types behave like classes
References: <3ABC5EE9.2943.14C818C7@localhost>
Message-ID: <3ABE3949.DE50540C@ActiveState.com>

Gordon McMillan wrote:
> 
>...
> 
> Those are introspective; typically read in the interactive
> interpreter. I can't do anything with them except read them.
>
> If you wrap, eg, __len__, what can I do with it except call it? 

You can store away a reference to it and then call it later.

> I can already do that with len().
> 
> > Benefits:
> >
> >  * objects based on extension types would "look more like"
> >  classes to
> > Python programmers so there is less confusion about how they are
> > different
> 
> I think it would probably enhance confusion to have the "look
> more like" without "being more like".

Looking more like is the same as being more like. In other words, there
is a finite list of differences in behavior between types and classes,
and I think we should chip away at them one by one with each release of
Python.

Do you think that there is a particular difference (perhaps relating to
subclassing) that is the "real" difference and the rest are just
cosmetic?

> >  * users could stop using the type() function to get concrete
> >  types and
> > instead use __class__. After a version or two, type() could be
> > formally deprecated in favor of isinstance and __class__.
> 
> __class__ is a callable object. It has a __name__. From the
> Python side, a type isn't much more than an address. 

Type objects also have names. They are not (yet) callable but I cannot
think of a circumstance in which that would matter. It would require
code like this:

cls = getattr(foo, "__class__", None)
if cls:
    cls(...)

I don't know where the arglist for cls would come from. In general, I
can't imagine what the goal of this code would be. I can see code like
this in a "closed world" situation where I know all of the classes
involved, but I can't imagine a case where this kind of code will work
with any old class.

Anyhow, I think that type objects should be callable just like
classes...but I'm trying to pick off low-hanging fruit first. I think
that the less "superficial" differences there are between types and
classes, the easier it becomes to tackle the deep differences because
more code out there will be naturally polymorphic instead of using: 

if type(obj) is InstanceType: 
	do_onething() 
else: 
	do_anotherthing()

That is an evil pattern if we are going to merge types and classes.
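A naturally polymorphic version of that dispatch might look like this; the helper name is invented for the sketch:

```python
# Instead of special-casing InstanceType, code can rely on __class__
# (falling back to type()) uniformly, so one code path handles class
# instances and built-in objects alike.  ``class_name`` is a made-up
# helper, not a proposed API.
def class_name(obj):
    return getattr(obj, "__class__", type(obj)).__name__

assert class_name([]) == "list"

class Foo:
    pass

assert class_name(Foo()) == "Foo"
```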

> Until
> Python's object model is redone, there are certain objects for
> which type(o) and o.__class__ return quite different things.

I am very nervous about waiting for a big-bang re-model of the object
model.

>...
> The major lesson I draw from ExtensionClass and friends is
> that achieving this behavior in today's Python is horrendously
> complex and fragile. Until we can do it right, I'd rather keep it
> simple (and keep the warts on the surface).

I'm trying to find an incremental way forward because nobody seems to
have time or energy for a big bang.

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From greg at cosc.canterbury.ac.nz  Sun Mar 25 23:53:02 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 26 Mar 2001 09:53:02 +1200 (NZST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <m14gEEA-000CnEC@artcom0.artcom-gmbh.de>
Message-ID: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz>

pf at artcom-gmbh.de (Peter Funk):

> All current filemanagers on Linux (KDE kfm, Gnome gmc) are flawed 
> in this respect.

I don't think you can call that a "flaw", given that these
filemanagers are only designed to deal with Unix file systems.

I think it's reasonable to only expect things in the platform
os module to deal with the platform's native file system.
Trying to anticipate how every platform's cross-platform
file servers for all other platforms are going to store their
data just isn't practical.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From guido at digicool.com  Mon Mar 26 04:03:52 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 25 Mar 2001 21:03:52 -0500
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: Your message of "Mon, 26 Mar 2001 09:53:02 +1200."
             <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> 
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> 
Message-ID: <200103260203.VAA05048@cj20424-a.reston1.va.home.com>

> > All current filemanagers on Linux (KDE kfm, Gnome gmc) are flawed 
> > in this respect.
> 
> I don't think you can call that a "flaw", given that these
> filemanagers are only designed to deal with Unix file systems.
> 
> I think it's reasonable to only expect things in the platform
> os module to deal with the platform's native file system.
> Trying to anticipate how every platform's cross-platform
> file servers for all other platforms are going to store their
> data just isn't practical.

You say that now, but as such cross-system servers become more common,
we should expect the tools to deal with them well, rather than
complain "the other guy doesn't play by our rules".

--Guido van Rossum (home page: http://www.python.org/~guido/)



From gmcm at hypernet.com  Mon Mar 26 04:44:59 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Sun, 25 Mar 2001 21:44:59 -0500
Subject: [Python-Dev] Making types behave like classes
In-Reply-To: <3ABE3949.DE50540C@ActiveState.com>
Message-ID: <3ABE66DB.18389.1CB7239A@localhost>

[Gordon]
> > I think it would probably enhance confusion to have the "look
> > more like" without "being more like".
[Paul] 
> Looking more like is the same as being more like. In other words,
> there is a finite list of differences in behavior between types
> and classes, and I think we should chip away at them one by one
> with each release of Python.

There's only one difference that matters: subclassing. I don't 
think there's an incremental path to that that leaves Python 
"easily extended".

[Gordon]
> > __class__ is a callable object. It has a __name__. From the
> > Python side, a type isn't much more than an address. 
> 
> Type objects also have names. 

But not a __name__.

> They are not (yet) callable but I
> cannot think of a circumstance in which that would matter. 

Take a look at copy.py.

> Anyhow, I think that type objects should be callable just like
> classes...but I'm trying to pick off low-hanging fruit first. I
> think that the less "superficial" differences there are between
> types and classes, the easier it becomes to tackle the deep
> differences because more code out there will be naturally
> polymorphic instead of using: 
> 
> if type(obj) is InstanceType: 
>  do_onething() 
> else: 
>  do_anotherthing()
> 
> That is an evil pattern if we are going to merge types and
> classes.

And it would likely become:
 if callable(obj.__class__):
   ....

Explicit is better than implicit for warts, too.
 


- Gordon



From moshez at zadka.site.co.il  Mon Mar 26 12:27:37 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 26 Mar 2001 12:27:37 +0200
Subject: [Python-Dev] sandbox?
Message-ID: <E14hUDp-0003tf-00@darjeeling>

I remember there was the discussion here about sandbox, but
I'm not sure I understand the rules. Checkin without asking
permission to sandbox ok? Just make my private dir and checkin
stuff?

Anybody who feels he can speak with authority is welcome ;-)
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From mwh21 at cam.ac.uk  Mon Mar 26 15:18:26 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 26 Mar 2001 14:18:26 +0100
Subject: [Python-Dev] Re: Alleged deprecation of shutils
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> <200103260203.VAA05048@cj20424-a.reston1.va.home.com>
Message-ID: <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido at digicool.com> writes:

> > > All current filemanagers on Linux (KDE kfm, Gnome gmc) are flawed 
> > > in this respect.
> > 
> > I don't think you can call that a "flaw", given that these
> > filemanagers are only designed to deal with Unix file systems.
> > 
> > I think it's reasonable to only expect things in the platform
> > os module to deal with the platform's native file system.
> > Trying to anticipate how every platform's cross-platform
> > file servers for all other platforms are going to store their
> > data just isn't practical.
> 
> You say that now, but as such cross-system servers become more common,
> we should expect the tools to deal with them well, rather than
> complain "the other guy doesn't play by our rules".

So, a goal for 2.2: getting moving/copying/deleting of files and
directories working properly (ie. using native APIs) on all major
supported platforms, with all the legwork that implies.  We're not
really very far from this now, are we?  Perhaps (the functionality of)
shutil.{rmtree,copy,copytree} should move into os and if necessary be
implemented in nt or dos or mac or whatever.  Any others?

Cheers,
M.

-- 
39. Re graphics:  A picture is worth 10K  words - but only those
    to describe the picture. Hardly any sets of 10K words can be
    adequately described with pictures.
  -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html




From jack at oratrix.nl  Mon Mar 26 16:26:41 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 26 Mar 2001 16:26:41 +0200
Subject: [Python-Dev] Re: Alleged deprecation of shutils 
In-Reply-To: Message by Michael Hudson <mwh21@cam.ac.uk> ,
	     26 Mar 2001 14:18:26 +0100 , <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <20010326142642.48DE836B2C0@snelboot.oratrix.nl>

> > You say that now, but as such cross-system servers become more common,
> > we should expect the tools to deal with them well, rather than
> > complain "the other guy doesn't play by our rules".
> 
> So, a goal for 2.2: getting moving/copying/deleting of files and
> directories working properly (ie. using native APIs) on all major
> supported platforms, with all the legwork that implies.

Well, if we want to support the case Guido sketches, a machine on one platform 
being fileserver for another platform, things may well be bleak.

For instance, most Apple-fileservers for Unix will use the .HSResource 
directory to store resource forks and the .HSancillary file to store mac 
file-info, but not all do. I didn't try it yet, but from what I've read MacOSX 
over NFS uses a different scheme.

But, all that said, if we look only at a single platform the basic 
functionality of shutils should work. There's a Mac module (macostools) that 
has most of the functionality, but of course not all, and it has some extra as 
well, and not all names are the same (shutil compatibility wasn't a goal when 
it was written).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From guido at digicool.com  Mon Mar 26 16:33:00 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 09:33:00 -0500
Subject: [Python-Dev] sandbox?
In-Reply-To: Your message of "Mon, 26 Mar 2001 12:27:37 +0200."
             <E14hUDp-0003tf-00@darjeeling> 
References: <E14hUDp-0003tf-00@darjeeling> 
Message-ID: <200103261433.JAA06244@cj20424-a.reston1.va.home.com>

> I remember there was the discussion here about sandbox, but
> I'm not sure I understand the rules. Checkin without asking
> permission to sandbox ok? Just make my private dir and checkin
> stuff?
> 
> Anybody who feels he can speak with authority is welcome ;-)

We appreciate it if you ask first, but yes, sandbox is just what it
says.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 26 17:32:09 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 10:32:09 -0500
Subject: [Python-Dev] Re: Alleged deprecation of shutils
In-Reply-To: Your message of "26 Mar 2001 14:18:26 +0100."
             <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk> 
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> <200103260203.VAA05048@cj20424-a.reston1.va.home.com>  
            <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <200103261532.KAA06398@cj20424-a.reston1.va.home.com>

> So, a goal for 2.2: getting moving/copying/deleting of files and
> directories working properly (ie. using native APIs) on all major
> supported platforms, with all the legwork that implies.  We're not
> really very far from this now, are we?  Perhaps (the functionality of)
> shutil.{rmtree,copy,copytree} should move into os and if necessary be
> implemented in nt or dos or mac or whatever.  Any others?

Given that it's currently in shutil, please just consider improving
that, unless you believe that the basic API should be completely
different.  This sounds like something PEP-worthy!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From moshez at zadka.site.co.il  Mon Mar 26 17:49:10 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 26 Mar 2001 17:49:10 +0200
Subject: [Python-Dev] sandbox?
In-Reply-To: <200103261433.JAA06244@cj20424-a.reston1.va.home.com>
References: <200103261433.JAA06244@cj20424-a.reston1.va.home.com>, <E14hUDp-0003tf-00@darjeeling>
Message-ID: <E14hZF0-0004Mj-00@darjeeling>

On Mon, 26 Mar 2001 09:33:00 -0500, Guido van Rossum <guido at digicool.com> wrote:
 
> We appreciate it if you ask first, but yes, sandbox is just what it
> says.

OK, thanks.
I want to checkin my Rational class to the sandbox, probably make
a directory rational/ and put it there.
 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From jeremy at alum.mit.edu  Mon Mar 26 19:57:26 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 26 Mar 2001 12:57:26 -0500 (EST)
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>
References: <20010324214748.A32161@glacier.fnational.com>
	<LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>
Message-ID: <15039.33542.399553.604556@slothrop.digicool.com>

>>>>> "TP" == Tim Peters <tim.one at home.com> writes:

  >> I'm not sure about Jython.  The sys._getframe(), frame.suspend(),
  >> and frame.resume() low level interface is nice.  I think Jython
  >> must know which frames are going to be suspended at compile time.

  TP> Yes, Samuele said as much.  My belief is that generators don't
  TP> become *truly* pleasant unless "yield" ("suspend"; whatever) is
  TP> made a new statement type.  Then Jython knows exactly where
  TP> yields can occur.  As in CLU (but not Icon), it would also be
  TP> fine by me if routines *used* as generators also needed to be
  TP> explicitly marked as such (this is a non-issue in Icon because
  TP> *every* Icon expression "is a generator" -- there is no other
  TP> kind of procedure there).

If "yield" is a keyword, then any function that uses yield is a
generator.  With this policy, it's straightforward to determine which
functions are generators at compile time.  It's also Pythonic:
Assignment to a name denotes local scope; use of yield denotes
generator. 

Jeremy



From jeremy at digicool.com  Mon Mar 26 21:49:31 2001
From: jeremy at digicool.com (Jeremy Hylton)
Date: Mon, 26 Mar 2001 14:49:31 -0500 (EST)
Subject: [Python-Dev] SF bugs tracker?
Message-ID: <15039.40267.489930.186757@localhost.localdomain>

I've been unable to reach the bugs tracker today.  Every attempt
results in a document-contains-no-data error.  Has anyone else had any
luck?

Jeremy




From jack at oratrix.nl  Mon Mar 26 21:55:40 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 26 Mar 2001 21:55:40 +0200
Subject: [Python-Dev] test_coercion failing 
In-Reply-To: Message by "Tim Peters" <tim.one@home.com> ,
	     Wed, 21 Mar 2001 15:18:54 -0500 , <LNBBLJKPBEHFEDALKOLCMEEOJHAA.tim.one@home.com> 
Message-ID: <20010326195546.238C0EDD21@oratrix.oratrix.nl>

Well, it turns out that disabling fused-add-mul indeed fixes the
problem. The CodeWarrior manual warns that results may be slightly
different with and without fused instructions, but the example they
give is with operations apparently done in higher precision with the
fused instructions. No word about nonstandard behaviour for +0.0 and
-0.0.

As this seems to be a PowerPC issue, not a MacOS issue, it is
something that other PowerPC porters may want to look out for too
(does AIX still exist?).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 



From guido at digicool.com  Mon Mar 26 10:14:14 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 03:14:14 -0500
Subject: [Python-Dev] SF bugs tracker?
In-Reply-To: Your message of "Mon, 26 Mar 2001 14:49:31 EST."
             <15039.40267.489930.186757@localhost.localdomain> 
References: <15039.40267.489930.186757@localhost.localdomain> 
Message-ID: <200103260814.DAA01098@cj20424-a.reston1.va.home.com>

> I've been unable to reach the bugs tracker today.  Every attempt
> results in a document-contains-no-data error.  Has anyone else had any
> luck?

This is a bizarre SF bug.  When you're browsing patches, clicking on
Bugs will give you this error, and vice versa.

My workaround: go to my personal page, click on a bug listed there,
and make an empty change (i.e. click Submit Changes without making any
changes).  This will present the Bugs browser.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 26 11:46:48 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 04:46:48 -0500
Subject: [Python-Dev] WANTED: chairs for next Python conference
Message-ID: <200103260946.EAA02170@cj20424-a.reston1.va.home.com>

I'm looking for chairs for the next Python conference.  At least the
following positions are still open: BOF chair (new!), Application
track chair, Tools track chair.  (The Apps and Tools tracks are
roughly what the Zope and Apps tracks were this year.)  David Ascher
is program chair, I am conference chair (again).

We're in the early stages of conference organization; Foretec is
looking at having it in a Southern city in the US, towards the end of
February 2002.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From paulp at ActiveState.com  Tue Mar 27 00:06:42 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 26 Mar 2001 14:06:42 -0800
Subject: [Python-Dev] Making types behave like classes
References: <3ABE66DB.18389.1CB7239A@localhost>
Message-ID: <3ABFBD72.30F69817@ActiveState.com>

Gordon McMillan wrote:
> 
>..
> 
> There's only one difference that matters: subclassing. I don't
> think there's an incremental path to that that leaves Python
> "easily extended".

All of the differences matter! Inconsistency is a problem in and of
itself.

> But not a __name__.

They really do have __name__s. Try it. type("").__name__

> 
> > They are not (yet) callable but I
> > cannot think of a circumstance in which that would matter.
> 
> Take a look at copy.py.

copy.py only expects the type object to be callable WHEN there is a
__getinitargs__ method. Types won't have this method, so copy.py never
calls the class. Plus, the whole section only gets run for objects of
type InstanceType.
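The logic being described can be paraphrased like this; it is a sketch of the shape of copy.py's instance-copying path, not the exact library source:

```python
# Paraphrase of copy.py's instance-copying logic: the class is
# *called* only when the instance defines __getinitargs__; otherwise
# an empty shell is created and its __class__ is reassigned, so the
# class's callability never comes up.
class _Empty:
    pass

def copy_inst(x):
    if hasattr(x, '__getinitargs__'):
        y = x.__class__(*x.__getinitargs__())   # class called here...
    else:
        y = _Empty()
        y.__class__ = x.__class__               # ...but not here
    y.__dict__.update(x.__dict__)
    return y
```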

The important point is that it is not useful to know that __class__ is
callable without knowing the arguments it takes. __class__ is much more
often used as a unique identifier for pointer equality and/or for the
__name__. In looking through the standard library, I can only see places
that the code would improve if __class__ were available for extension
objects.

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From tim.one at home.com  Tue Mar 27 00:08:30 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 26 Mar 2001 17:08:30 -0500
Subject: [Python-Dev] test_coercion failing 
In-Reply-To: <20010326195546.238C0EDD21@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEHPJIAA.tim.one@home.com>

[Jack Jansen]
> Well, it turns out that disabling fused-add-mul indeed fixes the
> problem. The CodeWarrior manual warns that results may be slightly
> different with and without fused instructions, but the example they
> give is with operations apparently done in higher precision with the
> fused instructions. No word about nonstandard behaviour for +0.0 and
> -0.0.
>
> As this seems to be a PowerPC issue, not a MacOS issue, it is
> something that other PowerPC porters may want to look out for too
> (does AIX still exist?).

The PowerPC architecture's fused instructions are wonderful for experts,
because in a*b+c (assuming IEEE doubles w/ 53 bits of precision) they compute
the a*b part to 106 bits of precision internally, and the add of c gets to
see all of them.  This is great if you *know* c is pretty much the negation
of the high-order 53 bits of the product, because it lets you get at the
*lower* 53 bits too; e.g.,

    hipart = a*b;
    lopart = a*b - hipart;  /* assuming fused mul-sub is generated */

gives a pair of doubles (hipart, lopart) whose mathematical (not f.p.) sum
hipart + lopart is exactly equal to the mathematical (not f.p.) product a*b.
In the hands of an expert, this can, e.g., be used to write ultra-fast
high-precision math libraries:  it gives a very cheap way to get the effect
of computing with about twice the native precision.

So that's the kind of thing they're warning you about:  without the fused
mul-sub, "lopart" above is always computed to be exactly 0.0, and so is
useless.  Contrarily, some fp algorithms *depend* on cancelling out oodles of
leading bits in intermediate results, and in the presence of fused mul-add
deliver totally bogus results.

However, screwing up 0's sign bit has nothing to do with any of that, and if
the HW is producing -0 for a fused (+anything)*(+0)-(+0), it can't be called
anything other than a HW bug (assuming it's not in the to-minus-infinity
rounding mode).

When a given compiler generates fused instructions (when available) is a
x-compiler crap-shoot, and the compiler you're using *could* have generated
them before with the same end result.  There's really nothing portable we can
do in the source code to convince a compiler never to generate them.  So it
looks like you're stuck with a compiler switch here.

not-the-outcome-i-was-hoping-for-but-i'll-take-it<wink>-ly y'rs  - tim




From tim.one at home.com  Tue Mar 27 00:08:37 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 26 Mar 2001 17:08:37 -0500
Subject: [Python-Dev] SF bugs tracker?
In-Reply-To: <200103260814.DAA01098@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHPJIAA.tim.one@home.com>

[Jeremy]
> I've been unable to reach the bugs tracker today.  Every attempt
> results in a document-contains-no-data error.  Has anyone else had any
> luck?

[Guido]
> This is a bizarre SF bug.  When you're browsing patches, clicking on
> Bugs will give you this error, and vice versa.
>
> My workaround: go to my personal page, click on a bug listed there,
> and make an empty change (i.e. click Submit Changes without making any
> changes).  This will present the Bugs browser.

Possibly unique to Netscape?  I've never seen this behavior -- although
sometimes I have trouble getting to *patches*, but only when logged in.

clear-the-cache-and-reboot<wink>-ly y'rs  - tim




From moshez at zadka.site.co.il  Tue Mar 27 00:26:44 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Tue, 27 Mar 2001 00:26:44 +0200
Subject: [Python-Dev] Release 2.0.1: Heads Up
Message-ID: <E14hfRk-00051d-00@darjeeling>

Greetings, earthlings!

As Guido said in the last conference, there is going to be a bugfix release
of Python 2.0, Python 2.0.1. Originally meant to be only a license bugfix
release, comments in the Python community have indicated a need for a real
bugfix release. PEP 6[1] has been written by Aahz, which outlines a procedure
for such releases. With Guido's blessing, I have volunteered to be the
Patch Czar (see the PEP!) for the 2.0.1 release. In this job, I intend
to be feared and hated throughout the Python community -- men will 
tremble to hear the sounds of my footsteps...err...sorry, got sidetracked.

This is the first Python pure bugfix release, and I feel a lot of weight
rests on my shoulders as to whether this experiment is successful. Since
this is the first bugfix release, I intend to be ultra-super-conservative.
I can live with a release that does not fix all the bugs, but I am very afraid
of a release that breaks a single person's code. Such a thing will give
Python bugfix releases a very bad reputation. So, I am going to be a very
strict Czar.

I will try to follow consistent rules about which patches to integrate,
but I am only human. I will make all my decisions in public, so they
will be up for review by the community.

There are a few rules I intend to go by

1. No fixes which you have to change your code to enjoy. (E.g., adding a new
   function because the previous API was idiotic)
2. No fixes which have not been applied to the main branch, unless they
   are not relevant to the main branch at all. I much prefer to get a pointer
   to an applied patch or cvs checkin message than a fresh patch. Of course,
   there are cases where this is impossible, so this isn't strict.
3. No fixes which have "stricter checking". Stricter checking is a good
   thing, but not in bug fix releases.
4. No fixes which have a reasonable chance to break someone's code. That
   means that if there's a bug people have a good chance of counting on,
   it won't be fixed.
5. No "improved documentation/error message" patches. This is stuff that
   gets in people's eyeballs -- I want bugfix upgrades to be as smooth
   as possible.
6. No "internal code was cleaned up". That's a good thing in the development
   branch, but not in bug fix releases.

Note that these rules will *not* be made more lenient, but they might
get stricter, if it seems such strictness is needed in order to make
sure bug fix releases are smooth enough.

However, please remember that this is intended to help you -- the Python
using community. So please, let me know of bugfixes that you need or want
in Python 2.0. I promise that I will consider every request.
Note also that the Patch Czar is given very few responsibilities ---
all my decisions are subject to Guido's approval. That means that he
gets the final word about each patch.

I intend to post a list of patches I intend to integrate soon -- at the
latest, this Friday, hopefully sooner. I expect to have 2.0.1a1 a week
after that, and further schedule requirements will follow from the
quality of that release. Because it has the dual purpose of also being
a license bugfix release, schedule might be influenced by non-technical
issues. As always, Guido will be the final arbitrator.

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From martin at loewis.home.cs.tu-berlin.de  Tue Mar 27 01:00:24 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 27 Mar 2001 01:00:24 +0200
Subject: [Python-Dev] Release 2.0.1: Heads Up
Message-ID: <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de>

> I have volunteered to be the Patch Czar (see the PEP!) for the 2.0.1
> release

Great!

> So please, let me know of bugfixes that you need or want in Python
> 2.0.

In addition to your procedures (which are all very reasonable), I'd
like to point out that Tim has created a 2.0.1 patch class on the SF
patch manager. I hope you find the time to review the patches in there
(which should not be very difficult at the moment). This is meant for
patches which can't be proposed in terms of 'cvs diff' commands; for
mere copying of code from the mainline, this is probably overkill.

Also note that I have started to give a detailed analysis of what
exactly has changed in the NEWS file of the 2.0 maintenance branch --
I'm curious to know what you think about this procedure. If you don't
like it, feel free to undo my changes there.

Regards,
Martin



From guido at digicool.com  Mon Mar 26 13:23:08 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 06:23:08 -0500
Subject: [Python-Dev] Release 2.0.1: Heads Up
In-Reply-To: Your message of "Tue, 27 Mar 2001 01:00:24 +0200."
             <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de> 
References: <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de> 
Message-ID: <200103261123.GAA02697@cj20424-a.reston1.va.home.com>

> > I have volunteered to be the Patch Czar (see the PEP!) for the 2.0.1
> > release
> 
> Great!

Congratulations to Moshe.

> > So please, let me know of bugfixes that you need or want in Python
> > 2.0.
> 
> In addition to your procedures (which are all very reasonable), I'd
> like to point out that Tim has created a 2.0.1 patch class on the SF
> patch manager. I hope you find the time to review the patches in there
> (which should not be very difficult at the moment). This is meant for
> patches which can't be proposed in terms of 'cvs diff' commands; for
> mere copying of code from the mainline, this is probably overkill.
> 
> Also note that I have started to give a detailed analysis of what
> exactly has changed in the NEWS file of the 2.0 maintenance branch --
> I'm curious to know what you think about this procedure. If you don't
> like it, feel free to undo my changes there.

Regardless of what Moshe thinks, *I* think that's a great idea.  I
hope that Moshe continues this.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From aahz at panix.com  Tue Mar 27 01:35:55 2001
From: aahz at panix.com (aahz at panix.com)
Date: Mon, 26 Mar 2001 15:35:55 -0800 (PST)
Subject: [Python-Dev] PEP 6 cleanup
Message-ID: <200103262335.SAA22663@panix3.panix.com>

Now that Moshe has agreed to be Patch Czar for 2.0.1, I'd like some
clarification/advice on a couple of issues before I release the next
draft:

Issues To Be Resolved

    What is the equivalent of python-dev for people who are responsible
    for maintaining Python?  (Aahz proposes either python-patch or
    python-maint, hosted at either python.org or xs4all.net.)

    Does SourceForge make it possible to maintain both separate and
    combined bug lists for multiple forks?  If not, how do we mark bugs
    fixed in different forks?  (Simplest is to generate a new bug for
    each fork it gets fixed in, referring back to the main bug number
    for details.)



From moshez at zadka.site.co.il  Tue Mar 27 01:49:33 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Tue, 27 Mar 2001 01:49:33 +0200
Subject: [Python-Dev] Release 2.0.1: Heads Up
In-Reply-To: <200103261123.GAA02697@cj20424-a.reston1.va.home.com>
References: <200103261123.GAA02697@cj20424-a.reston1.va.home.com>, <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de>
Message-ID: <E14hgjt-0005KI-00@darjeeling>

On Mon, 26 Mar 2001 06:23:08 -0500, Guido van Rossum <guido at digicool.com> wrote:

> > Also note that I have started to give a detailed analysis of what
> > exactly has changed in the NEWS file of the 2.0 maintenance branch --
> > I'm curious to know what you think about this procedure. If you don't
> > like it, feel free to undo my changes there.
> 
> Regardless of what Moshe thinks, *I* think that's a great idea.  I
> hope that Moshe continues this.

I will; I think this is a good idea too.
I'm still working on a log detailing the patches I intend to backport
(some will take some effort because of several major overhauls I do
*not* intend to backport, like reindentation and string methods).
I have already trimmed it down to 200-something patches I'm going to
consider integrating, and I'm now making a second pass over it.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From nas at python.ca  Tue Mar 27 06:43:33 2001
From: nas at python.ca (Neil Schemenauer)
Date: Mon, 26 Mar 2001 20:43:33 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>; from tim.one@home.com on Sun, Mar 25, 2001 at 01:11:58AM -0500
References: <20010324214748.A32161@glacier.fnational.com> <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>
Message-ID: <20010326204333.A17390@glacier.fnational.com>

Tim Peters wrote:
> My belief is that generators don't become *truly* pleasant
> unless "yield" ("suspend"; whatever) is made a new statement
> type.

That's fine, but how do you create a generator?  I suppose that
using a "yield" statement within a function could make it into a
generator.  Then, calling it would create an instance of a
generator.  Seems a bit too magical to me.
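[Ed. -- a sketch of the semantics being discussed, in the syntax Python
later adopted: a def whose body contains "yield" compiles to a generator
function, and calling it creates a suspended generator instance:]

```python
def squares(n):
    # The presence of "yield" makes this a generator function.
    for i in range(n):
        yield i * i

g = squares(3)      # no body code has run yet; g is a suspended generator
print(next(g))      # runs up to the first yield -> 0
print(list(g))      # resumes where it left off -> [1, 4]
```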

  Neil



From nas at arctrix.com  Tue Mar 27 07:08:24 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 26 Mar 2001 21:08:24 -0800
Subject: [Python-Dev] nano-threads?
Message-ID: <20010326210824.B17390@glacier.fnational.com>

Here are some silly bits of code implementing single frame
coroutines and threads using my frame suspend/resume patch.
The coroutine example does not allow a value to be passed but
that would be simple to add.  An updated version of the (very
experimental) patch is here:

    http://arctrix.com/nas/generator3.diff

For me, thinking in terms of frames is quite natural and I didn't
have any trouble writing these examples.  I'm hoping they will be
useful to other people who are trying to get their minds around
continuations.  If you're sick of such postings on python-dev, flame
me privately and I will stop.  Cheers,

  Neil

#####################################################################
# Single frame threads (nano-threads?).  Output should be:
#
# foo
# bar
# foo
# bar
# bar

import sys

def yield():
    f = sys._getframe(1)
    f.suspend(f)

def run_threads(threads):
    frame = {}
    for t in threads:
        frame[t] = t()
    while threads:
        for t in threads[:]:
            f = frame.get(t)
            if not f:
                threads.remove(t)
            else:
                frame[t] = f.resume()


def foo():
    for x in range(2):
        print "foo"
        yield()

def bar():
    for x in range(3):
        print "bar"
        yield()

def test():
    run_threads([foo, bar])

test()
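[Ed. -- for comparison, a sketch of the same round-robin scheduler
written against the generator semantics Python later adopted, in modern
syntax; it needs no frame suspend/resume patch and produces the same
output:]

```python
def foo():
    for x in range(2):
        print("foo")
        yield

def bar():
    for x in range(3):
        print("bar")
        yield

def run_threads(funcs):
    # Round-robin over generator instances; drop each one when it finishes.
    gens = [f() for f in funcs]
    while gens:
        for g in gens[:]:
            try:
                next(g)
            except StopIteration:
                gens.remove(g)

run_threads([foo, bar])   # foo bar foo bar bar
```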

#####################################################################
# Single frame coroutines.  Should print:
#
# foo
# bar
# baz
# foo
# bar
# baz
# foo
# ...

import sys

def transfer(func):
    f = sys._getframe(1)
    f.suspend((f, func))

def run_coroutines(args):
    funcs = {}
    for f in args:
        funcs[f] = f
    current = args[0]
    while 1:
        rv = funcs[current]()
        if not rv:
            break
        (frame, next) = rv
        funcs[current] = frame.resume
        current = next


def foo():
    while 1:
        print "foo"
        transfer(bar)

def bar():
    while 1:
        print "bar"
        transfer(baz)
        transfer(foo)

def baz():
    while 1:
        print "baz"
        transfer(bar)

run_coroutines([foo, bar, baz])



From greg at cosc.canterbury.ac.nz  Tue Mar 27 07:48:24 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 27 Mar 2001 17:48:24 +1200 (NZST)
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <15039.33542.399553.604556@slothrop.digicool.com>
Message-ID: <200103270548.RAA09571@s454.cosc.canterbury.ac.nz>

Jeremy Hylton <jeremy at alum.mit.edu>:

> If "yield" is a keyword, then any function that uses yield is a
> generator.  With this policy, it's straightforward to determine which
> functions are generators at compile time.

But a function which calls a function that contains
a "yield" is a generator, too. Does the compiler need
to know about such functions?
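[Ed. -- under the compile-time rule Jeremy describes, the answer turned
out to be no: only the function that lexically contains "yield" is a
generator; a caller is an ordinary function that merely returns a
generator object. A sketch in modern Python:]

```python
import inspect

def g():
    yield 1

def f():
    return g()   # calls a generator function, but contains no "yield" itself

# Only g is compiled as a generator; f is an ordinary function,
# so the compiler needs no special knowledge of it.
assert inspect.isgeneratorfunction(g)
assert not inspect.isgeneratorfunction(f)
assert list(f()) == [1]
```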

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From jeremy at digicool.com  Tue Mar 27 19:06:20 2001
From: jeremy at digicool.com (Jeremy Hylton)
Date: Tue, 27 Mar 2001 12:06:20 -0500 (EST)
Subject: [Python-Dev] distutils change breaks code, Pyfort
In-Reply-To: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>
References: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>
Message-ID: <15040.51340.820929.133487@localhost.localdomain>

>>>>> "PFD" == Paul F Dubois <paul at pfdubois.com> writes:

  PFD> The requirement of a version argument to the distutils command
  PFD> breaks Pyfort and many of my existing packages. These packages
  PFD> are not intended for use with the distribution commands and a
  PFD> package version number would be meaningless.

  PFD> I will make a new Pyfort that supplies a version number to the
  PFD> call it makes to setup. However, I think this change to
  PFD> distutils is a poor idea. If the version number would be
  PFD> required for the distribution commands, let *them* complain,
  PFD> perhaps by setting a default value of
  PFD> time.asctime(time.gmtime()) or something that the distribution
  PFD> commands could object to.

  PFD> I apologize if I missed an earlier discussion of this change
  PFD> that seems to be in 2.1b2 but not 2.1b1, as I am new to this
  PFD> list.

I haven't seen any discussion of distutils changes on this list.
It's a good question, though.  Should distutils be allowed to change
between beta releases in a way that breaks user code?

There are two possibilities:

1. Guido has decided that distutils release cycles need not be related
   to Python release cycles.  He has said as much for pydoc.  If so,
   the timing of the change is just an unhappy coincidence.

2. Distutils is considered to be part of the standard library and
   should follow the same rules as the rest of the library.  No new
   features after the first beta release, just bug fixes.  And no
   incompatible changes without ample warning.

I think that distutils is mature enough to follow the second set of
rules -- and that the change should be reverted before the final
release.

Jeremy




From gward at python.net  Tue Mar 27 19:09:15 2001
From: gward at python.net (Greg Ward)
Date: Tue, 27 Mar 2001 12:09:15 -0500
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>; from loewis@informatik.hu-berlin.de on Sat, Mar 24, 2001 at 01:02:53PM +0100
References: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>
Message-ID: <20010327120915.A16082@cthulhu.gerg.ca>

On 24 March 2001, Martin von Loewis said:
> There should be a mechanism to tell setup.py not to build a module at
> all. Since it is looking into Modules/Setup anyway, perhaps a
> 
> *excluded*
> dbm
> 
> syntax in Modules/Setup would be appropriate? Of course, makesetup
> needs to be taught such a syntax. Alternatively, an additional
> configuration file or command line options might work.

FWIW, any new "Setup" syntax would also have to be taught to the
'read_setup_file()' function in distutils.extension.

        Greg
-- 
Greg Ward - nerd                                        gward at python.net
http://starship.python.net/~gward/
We have always been at war with Oceania.



From gward at python.net  Tue Mar 27 19:13:35 2001
From: gward at python.net (Greg Ward)
Date: Tue, 27 Mar 2001 12:13:35 -0500
Subject: [Python-Dev] Re: Alleged deprecation of shutils
In-Reply-To: <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>; from mwh21@cam.ac.uk on Mon, Mar 26, 2001 at 02:18:26PM +0100
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> <200103260203.VAA05048@cj20424-a.reston1.va.home.com> <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <20010327121335.B16082@cthulhu.gerg.ca>

On 26 March 2001, Michael Hudson said:
> So, a goal for 2.2: getting moving/copying/deleting of files and
> directories working properly (ie. using native APIs) on all major
> supported platforms, with all the legwork that implies.  We're not
> really very far from this now, are we?  Perhaps (the functionality of)
> shutil.{rmtree,copy,copytree} should move into os and if necessary be
> implemented in nt or dos or mac or whatever.  Any others?

The code already exists, in distutils/file_utils.py.  It's just a
question of giving it a home in the main body of the standard library.

(FWIW, the reasons I didn't patch shutil.py are 1) I didn't want to be
constrained by backward compatibility, and 2) I didn't have a time
machine to go back and change shutil.py in all existing 1.5.2
installations.)

        Greg
-- 
Greg Ward - just another /P(erl|ython)/ hacker          gward at python.net
http://starship.python.net/~gward/
No animals were harmed in transmitting this message.



From guido at digicool.com  Tue Mar 27 07:33:46 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 27 Mar 2001 00:33:46 -0500
Subject: [Python-Dev] distutils change breaks code, Pyfort
In-Reply-To: Your message of "Tue, 27 Mar 2001 12:06:20 EST."
             <15040.51340.820929.133487@localhost.localdomain> 
References: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>  
            <15040.51340.820929.133487@localhost.localdomain> 
Message-ID: <200103270533.AAA04707@cj20424-a.reston1.va.home.com>

> >>>>> "PFD" == Paul F Dubois <paul at pfdubois.com> writes:
> 
>   PFD> The requirement of a version argument to the distutils command
>   PFD> breaks Pyfort and many of my existing packages. These packages
>   PFD> are not intended for use with the distribution commands and a
>   PFD> package version number would be meaningless.
> 
>   PFD> I will make a new Pyfort that supplies a version number to the
>   PFD> call it makes to setup. However, I think this change to
>   PFD> distutils is a poor idea. If the version number would be
>   PFD> required for the distribution commands, let *them* complain,
>   PFD> perhaps by setting a default value of
>   PFD> time.asctime(time.gmtime()) or something that the distribution
>   PFD> commands could object to.
> 
>   PFD> I apologize if I missed an earlier discussion of this change
>   PFD> that seems to be in 2.1b2 but not 2.1b1, as I am new to this
>   PFD> list.
> 
> I haven't seen any discussion of distutils changes on this list.
> It's a good question, though.  Should distutils be
> allowed to change between beta releases in a way that breaks user
> code?
> 
> There are two possibilities:
> 
> 1. Guido has decided that distutils release cycles need not be related
>    to Python release cycles.  He has said as much for pydoc.  If so,
>    the timing of the change is just an unhappy coincidence.
> 
> 2. Distutils is considered to be part of the standard library and
>    should follow the same rules as the rest of the library.  No new
>    features after the first beta release, just bug fixes.  And no
>    incompatible changes without ample warning.
> 
> I think that distutils is mature enough to follow the second set of
> rules -- and that the change should be reverted before the final
> release.
> 
> Jeremy

I agree.  *Allowing* a version argument is fine.  *Requiring* it is
too late in the game.  (And may be a wrong choice anyway, but I'm not
sure of the issues.)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fdrake at acm.org  Wed Mar 28 16:39:42 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Wed, 28 Mar 2001 09:39:42 -0500 (EST)
Subject: [Python-Dev] SF bugs tracker?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEHPJIAA.tim.one@home.com>
References: <200103260814.DAA01098@cj20424-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCMEHPJIAA.tim.one@home.com>
Message-ID: <15041.63406.740044.659810@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Possibly unique to Netscape?  I've never seen this behavior -- although
 > sometimes I have trouble getting to *patches*, but only when logged in.

  No -- I was getting this with Konqueror as well.  Konqueror is the
KDE 2 browser/file manager.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From moshez at zadka.site.co.il  Wed Mar 28 19:02:01 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 19:02:01 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
Message-ID: <E14iJKb-0000Kf-00@darjeeling>

After labouring over the list of log messages for 2-3 days, I finally
have a tentative list of changes. I present it as a list of checkin
messages, complete with the versions. Sometimes I concatenated several
consecutive checkins into one -- "I fixed the bug", "oops, typo in last
fix", and similar.

Please go over the list and see if there's anything you feel should
not go in.
I'll write a short script that will dump the patch files later today,
so I can start applying soon -- so please look at it and check that
I have not made any terrible mistakes.
Thanks in advance.

Wholesale: Lib/tempfile.py (modulo __all__)
           Lib/sre.py
           Lib/sre_compile.py
           Lib/sre_constants.py
           Lib/sre_parse.py
           Modules/_sre.c          
----------------------------
Lib/locale.py, 1.15->1.16
setlocale(): In _locale-missing compatibility function, string
comparison should be done with != instead of "is not".
----------------------------
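[Ed. -- a short illustration of why this locale.py fix matters: "is"
compares object identity, "==" compares value, and two equal strings
need not be the same object:]

```python
a = "en_US.ISO8859-1"
b = ".".join(["en_US", "ISO8859-1"])   # equal value, built at runtime

assert a == b          # value comparison: always true for equal strings
# Identity is unreliable here: CPython builds a fresh object for the
# join result, so "a is not b" holds even though a == b -- which is
# exactly why the comparison had to use != rather than "is not".
assert a is not b
```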
Lib/xml/dom/pulldom.py, 1.20->1.21

When creating an attribute node using createAttribute() or
createAttributeNS(), use the parallel setAttributeNode() or
setAttributeNodeNS() to add the node to the document -- do not assume
that setAttributeNode() will operate properly for both.
----------------------------
Python/pythonrun.c, 2.128->2.129
Fix memory leak with SyntaxError.  (The DECREF was originally hidden
inside a piece of code that was deemed redundant; the DECREF was
unfortunately *not* redundant!)
----------------------------
Lib/quopri.py, 1.10->1.11
Strip \r as trailing whitespace as part of soft line endings.

Inspired by SF patch #408597 (Walter Dörwald): quopri, soft line
breaks and CRLF.  (I changed (" ", "\t", "\r") into " \t\r".)
----------------------------
Modules/bsddbmodule.c, 1.28->1.29
Don't raise MemoryError in keys() when the database is empty.

This fixes SF bug #410146 (python 2.1b shelve is broken).
----------------------------
Lib/fnmatch.py, 1.10->1.11

Donovan Baarda <abo at users.sourceforge.net>:
Patch to make "\" in a character group work properly.

This closes SF bug #409651.
----------------------------
Objects/complexobject.c, 2.34->2.35
SF bug [ #409448 ] Complex division is braindead
http://sourceforge.net/tracker/?func=detail&aid=409448&group_id=5470&atid=105470
Now less braindead.  Also added test_complex.py, which doesn't test much, but
fails without this patch.
----------------------------
Modules/cPickle.c, 2.54->2.55
SF bug [ #233200 ] cPickle does not use Py_BEGIN_ALLOW_THREADS.
http://sourceforge.net/tracker/?func=detail&aid=233200&group_id=5470&atid=105470
Wrapped the fread/fwrite calls in thread BEGIN_ALLOW/END_ALLOW brackets
Afraid I hit the "delete trailing whitespace key" too!  Only two "real" sections
of code changed here.
----------------------------
Lib/xml/sax/xmlreader.py, 1.13->1.14

Import the exceptions that this module can raise.
----------------------------
Lib/xmllib.py, 1.27->1.28
Moved clearing of "literal" flag.  The flag is set in setliteral which
can be called from a start tag handler.  When the corresponding end
tag is read the flag is cleared.  However, it didn't get cleared when
the start tag was for an empty element of the type <tag .../>.  This
modification fixes the problem.
----------------------------
Modules/pwdmodule.c, 1.24->1.25
Modules/grpmodule.c, 1.14->1.15

Make sure we close the group and password databases when we are done with
them; this closes SF bug #407504.
----------------------------
Python/errors.c, 2.61->2.62
Objects/intobject.c, 2.55->2.56
Modules/timemodule.c, 2.107->2.108
Use Py_CHARMASK for ctype macros. Fixes bug #232787.
----------------------------
Modules/termios.c, 2.17->2.18

Add more protection around the VSWTC/VSWTCH, CRTSCTS, and XTABS symbols;
these can be missing on some (all?) Irix and Tru64 versions.

Protect the CRTSCTS value with a cast; this can be a larger value on
Solaris/SPARC.

This should fix SF tracker items #405092, #405350, and #405355.
----------------------------
Modules/pyexpat.c, 2.42->2.43

Wrap some long lines, use only C89 /* */ comments, and add spaces around
some operators (style guide conformance).
----------------------------
Modules/termios.c, 2.15->2.16

Revised version of Jason Tishler's patch to make this compile on Cygwin,
which does not define all the constants.

This closes SF tracker patch #404924.
----------------------------
Modules/bsddbmodule.c, 1.27->1.28

Gustavo Niemeyer <niemeyer at conectiva.com>:
Fixed recno support (keys are integers rather than strings).
Work around a DB bug that caused stdin to be closed by rnopen() when the
DB file needed to exist but did not (no longer segfaults).

This closes SF tracker patch #403445.

Also wrapped some long lines and added whitespace around operators -- FLD.
----------------------------
Lib/urllib.py, 1.117->1.118
Fixing bug #227562 by calling  URLopener.http_error_default when
an invalid 401 request is being handled.
----------------------------
Python/compile.c, 2.170->2.171
Shuffle premature decref; nuke unreachable code block.
Fixes the "debug-build -O test_builtin.py and no test_b2.pyo" crash just
discussed on Python-Dev.
----------------------------
Python/import.c, 2.161->2.162
The code in PyImport_Import() tried to save itself a bit of work and
save the __builtin__ module in a static variable.  But this doesn't
work across Py_Finalize()/Py_Initialize()!  It also doesn't work when
using multiple interpreter states created with PyInterpreterState_New().

So I'm ripping out this small optimization.

This was probably broken since PyImport_Import() was introduced in
1997!  We really need a better test suite for multiple interpreter
states and repeatedly initializing.

This fixes the problems Barry reported in Demo/embed/loop.c.
----------------------------
Modules/unicodedata.c, 2.9->2.11

Renamed internal functions to avoid name clashes under OpenVMS
(fixes bug #132815)
----------------------------
Modules/pyexpat.c, 2.40->2.41

Remove the old version of my_StartElementHandler().  This was conditionally
compiled only for some versions of Expat, but was no longer needed as the
new implementation works for all versions.  Keeping it created multiple
definitions for Expat 1.2, which caused compilation to fail.
----------------------------
Lib/urllib.py, 1.116->1.117
provide simple recovery/escape from apparent redirect recursion.  If the
number of entries into http_error_302 exceeds the value set for the maxtries
attribute (which defaults to 10), the recursion is exited by calling
the http_error_500 method (or if that is not defined, http_error_default).
----------------------------
Modules/posixmodule.c, 2.183->2.184

Add a few more missing prototypes to the SunOS 4.1.4 section (no SF
bugreport, just an IRC one by Marion Delgado.) These prototypes are
necessary because the functions are tossed around, not just called.
----------------------------
Modules/mpzmodule.c, 2.35->2.36

Richard Fish <rfish at users.sourceforge.net>:
Fix the .binary() method of mpz objects for 64-bit systems.

[Also removed a lot of trailing whitespace elsewhere in the file. --FLD]

This closes SF patch #103547.
----------------------------
Python/pythonrun.c, 2.121->2.122
Ugly fix for SF bug 131239 (-x flag busted).
Bug was introduced by tricks played to make .pyc files executable
via cmdline arg.  Then again, -x worked via a trick to begin with.
If anyone can think of a portable way to test -x, be my guest!
----------------------------
Makefile.pre.in, 1.15->1.16
Specify directory permissions properly.  Closes SF patch #103717.
----------------------------
install-sh, 2.3->2.4
Update install-sh using version from automake 1.4.  Closes patch #103657
and #103717.
----------------------------
Modules/socketmodule.c, 1.135->1.136
Patch #103636: Allow writing strings containing null bytes to an SSL socket
----------------------------
Modules/mpzmodule.c, 2.34->2.35
Patch #103523, to make mpz module compile with Cygwin
----------------------------
Objects/floatobject.c, 2.78->2.79
SF patch 103543 from tg at freebsd.org:
PyFPE_END_PROTECT() was called on undefined var
----------------------------
Modules/posixmodule.c, 2.181->2.182
Fix Bug #125891 - os.popen2,3 and 4 leaked file objects on Windows.
----------------------------
Python/ceval.c, 2.224->2.225
SF bug #130532:  newest CVS won't build on AIX.
Removed illegal redefinition of REPR macro; kept the one with the
argument name that isn't too easy to confuse with zero <wink>.
----------------------------
Objects/classobject.c, 2.35->2.36
Rename dubiously named local variable 'cmpfunc' -- this is also a
typedef, and at least one compiler choked on this.

(SF patch #103457, by bquinlan)
----------------------------
Modules/_cursesmodule.c, 2.47->2.50
Patch #103485 from Donn Cave: patches to make the module compile on AIX and
    NetBSD
Rename 'lines' variable to 'nlines' to avoid conflict with a macro defined
    in term.h
2001/01/28 18:10:23 akuchling Modules/_cursesmodule.c
Bug #130117: add a prototype required to compile cleanly on IRIX
   (contributed by Paul Jackson)
----------------------------
Lib/statcache.py, 1.9->1.10
SF bug #130306:  statcache.py full of thread problems.
Fixed the thread races.  Function forget_dir was also utterly Unix-specific.
----------------------------
Python/structmember.c, 1.74->1.75
SF bug http://sourceforge.net/bugs/?func=detailbug&bug_id=130242&group_id=5470
SF patch http://sourceforge.net/patch/?func=detailpatch&patch_id=103453&group_id=5470
PyMember_Set of T_CHAR always raises exception.
Unfortunately, this is a use of a C API function that Python itself never makes, so
there's no .py test I can check in to verify this stays fixed.  But the fault in the
code is obvious, and Dave Cole's patch just as obviously fixes it.
----------------------------
Modules/arraymodule.c, 2.61->2.62
Correct one-line typo, reported by yole @ SF, bug 130077.
----------------------------
Python/compile.c, 2.150->2.151
Fix bug reported by Ka-Ping Yee: The compiler botched parsing function
parameters that contained both anonymous tuples and *arg or **arg. Ex:
def f(a, (b, c), *d): pass

Fix the symtable_params() to generate names in the right order for
co_varnames slot of code object.  Consider *arg and **arg before the
"complex" names introduced by anonymous tuples.
----------------------------
Modules/config.c.in, 1.72->1.73
_PyImport_Inittab: define the exceptions module's init function.
Fixes bug #121706.
----------------------------
Python/exceptions.c, 1.19->1.20
[Ed. -- only partial]
Leak pluggin', bug fixin' and better documentin'.  Specifically,

module__doc__: Document the Warning subclass hierarchy.

make_class(): Added a "goto finally" so that if populate_methods()
fails, the return status will be -1 (failure) instead of 0 (success).

fini_exceptions(): When decref'ing the static pointers to the
exception classes, clear out their dictionaries too.  This breaks a
cycle from class->dict->method->class and allows the classes with
unbound methods to be reclaimed.  This plugs a large memory leak in a
common Py_Initialize()/dosomething/Py_Finalize() loop.
----------------------------
Python/pythonrun.c, 2.118->2.119
Lib/atexit.py, 1.3->1.4
Bug #128475: mimetools.encode (sometimes) fails when called from a thread.
pythonrun.c:  In Py_Finalize, don't reset the initialized flag until after
the exit funcs have run.
atexit.py:  in _run_exitfuncs, mutate the list of pending calls in a
threadsafe way.  This wasn't a contributor to bug 128475, it just burned
my eyeballs when looking at that bug.
----------------------------
Modules/ucnhash.c, 1.6->1.7
gethash/cmpname both looked beyond the end of the character name.
This patch makes u"\N{x}" a bit less dependent on pure luck...
----------------------------
Lib/urllib.py, 1.112->1.113
Anonymous SF bug 129288: "The python 2.0 urllib has %%%x as a format
when quoting forbidden characters. There are scripts out there that
break with lower case, therefore I guess %%%X should be used."

I agree, so am fixing this.
----------------------------
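[Ed. -- the quote_char helper below is hypothetical, shown only to
illustrate the escape format in question: %%%X yields uppercase hex
escapes, while the old %%%x form yielded lowercase ones:]

```python
def quote_char(c):
    # Hypothetical helper: percent-escape a single character,
    # uppercase hex, as urllib's quoting produces after this fix.
    return "%%%X" % ord(c)

assert quote_char(" ") == "%20"
assert quote_char("\xff") == "%FF"   # uppercase, per the fix
assert "%%%x" % 0xFF == "%ff"        # the old lowercase form
```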
Python/bltinmodule.c, 2.191->2.192
Fix for the bug in complex() just reported by Ping.
----------------------------
Modules/socketmodule.c, 1.130->1.131
Use openssl/*.h to include the OpenSSL header files
----------------------------
Lib/distutils/command/install.py, 1.55->1.56
Modified version of a patch from Jeremy Kloth, to make .get_outputs()
produce a list of unique filenames:
    "While attempting to build an RPM using distutils on Python 2.0,
    rpm complained about duplicate files.  The following patch fixed
    that problem."
----------------------------
Objects/unicodeobject.c, 2.72->2.73
Objects/stringobject.c, 2.96->2.97
(partial)
Added checks to prevent PyUnicode_Count() from dumping core
in case the parameters are out of bounds and fixes error handling
for .count(), .startswith() and .endswith() for the case of
mixed string/Unicode objects.

This patch adds Python style index semantics to PyUnicode_Count()
indices (including the special handling of negative indices).

The patch is an extended version of patch #103249 submitted
by Michael Hudson (mwh) on SF. It also includes new test cases.
----------------------------
Modules/posixmodule.c, 2.180->2.181
Plug memory leak.
----------------------------
Python/dynload_mac.c, 2.9->2.11
Use #if TARGET_API_MAC_CARBON to determine carbon/classic macos, not #ifdef.
Added a separate extension (.carbon.slb) for Carbon dynamic modules.
----------------------------
Modules/mmapmodule.c, 2.26->2.27
SF bug 128713:  type(mmap_object) blew up on Linux.
----------------------------
Python/sysmodule.c, 2.81->2.82
stdout is sometimes a macro; use "outf" instead.

Submitted by: Mark Favas <m.favas at per.dem.csiro.au>
----------------------------
Python/ceval.c, 2.215->2.216
Add missing Py_DECREF in fast_cfunction.  Partial fix for SF bug
#127699.
----------------------------
Modules/mmapmodule.c, 2.24->2.25
Windows mmap should (as the docs probably <wink> say) create a mapping
without a name when the optional tagname arg isn't specified.  Was
actually creating a mapping with an empty string as the name.
----------------------------
Lib/shlex.py, 1.10->1.11
Patch #102953: Fix bug #125452, where shlex.shlex hangs when it
    encounters a string with an unmatched quote, by adding a check for
    EOF in the 'quotes' state.
----------------------------
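[Ed. -- with an EOF check in the 'quotes' state (and in present-day
shlex), an unmatched quote raises ValueError instead of hanging:]

```python
import shlex

# Well-formed input tokenizes normally.
assert shlex.split('say "hello world"') == ['say', 'hello world']

# An unterminated quote used to make the lexer loop forever waiting
# for the closing quote; it now fails fast at EOF.
try:
    shlex.split('"no closing quote')
    raised = False
except ValueError:
    raised = True
assert raised
```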
Modules/binascii.c, 2.27->2.28
Address a bug in the uuencode decoder, reported by "donut" in SF bug
#127718: '@' and '`' seem to be confused.
----------------------------
Objects/fileobject.c, 2.102->2.103
Tsk, tsk, tsk.  Treat FreeBSD the same as the other BSDs when defining
a fallback for TELL64.  Fixes SF Bug #128119.
----------------------------
Modules/posixmodule.c, 2.179->2.180
Anonymous SF bug report #128053 points out that the #ifdef for
including "tmpfile" in the posix_methods[] array is wrong -- should be
HAVE_TMPFILE, not HAVE_TMPNAM.
----------------------------
Lib/urllib.py, 1.109->1.110
Fixed bug which caused HTTPS not to work at all with string URLs
----------------------------
Objects/floatobject.c, 2.76->2.77
Fix a silly bug in float_pow.  Sorry Tim.
----------------------------
Modules/fpectlmodule.c, 2.12->2.13
Patch #103012: Update fpectlmodule for current glibc;
    The _setfpucw() function/macro doesn't seem to exist any more;
    instead there's an _FPU_SETCW macro.
----------------------------
Objects/dictobject.c, 2.71->2.72
dict_update has two boundary conditions: a.update(a) and a.update({})
Added test for second one.
----------------------------
Objects/listobject.c
fix leak
----------------------------
Lib/getopt.py, 1.11->1.13
getopt used to sort the long option names, in an attempt to simplify
the logic.  That resulted in a bug.  My previous getopt checkin repaired
the bug but left the sorting.  The solution is significantly simpler if
we don't bother sorting at all, so this checkin gets rid of the sort and
the code that relied on it.
Fix for SF bug
https://sourceforge.net/bugs/?func=detailbug&bug_id=126863&group_id=5470
"getopt long option handling broken".  Tossed the excruciating logic in
long_has_args in favor of something obviously correct.
----------------------------
Lib/curses/ascii.py, 1.3->1.4
Make isspace(chr(32)) return true
----------------------------
Lib/distutils/command/install.py, 1.54->1.55
Add forgotten initialization.  Fixes bug #120994, "Traceback with
    DISTUTILS_DEBUG set"
----------------------------
Objects/unicodeobject.c, 2.68->2.69
Fix off-by-one error in split_substring().  Fixes SF bug #122162.
----------------------------
Modules/cPickle.c, 2.53->2.54
Lib/pickle.py, 1.40->1.41
Minimal fix for the complaints about pickling Unicode objects.  (SF
bugs #126161 and 123634).

The solution doesn't use the unicode-escape encoding; that has other
problems (it seems not 100% reversible).  Rather, it transforms the
input Unicode object slightly before encoding it using
raw-unicode-escape, so that the decoding will reconstruct the original
string: backslash and newline characters are translated into their
\uXXXX counterparts.

This is backwards incompatible for strings containing backslashes, but
for some of those strings, the pickling was already broken.

Note that SF bug #123634 complains specifically that cPickle fails to
unpickle the pickle for u'' (the empty Unicode string) correctly.
This was an off-by-one error in load_unicode().

XXX Ugliness: in order to do the modified raw-unicode-escape, I've
cut-and-pasted a copy of PyUnicode_EncodeRawUnicodeEscape() into this
file that also encodes '\\' and '\n'.  It might be nice to migrate
this into the Unicode implementation and give this encoding a new name
('half-raw-unicode-escape'? 'pickle-unicode-escape'?); that would help
pickle.py too.  But right now I can't be bothered with the necessary
infrastructural changes.
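A minimal sketch of the scheme described above, in modern Python (the function names here are made up; the real change lives in cPickle.c and pickle.py):

```python
def encode_for_pickle(u):
    # Translate backslash and newline into their \uXXXX escapes first,
    # so that raw-unicode-escape decoding reconstructs the original
    # string exactly (plain raw-unicode-escape is not 100% reversible
    # for strings containing these characters).
    u = u.replace('\\', '\\u005c').replace('\n', '\\u000a')
    return u.encode('raw-unicode-escape')

def decode_from_pickle(b):
    # Decoding needs no special casing: \u005c and \u000a come back
    # as backslash and newline.
    return b.decode('raw-unicode-escape')
```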
----------------------------
Modules/socketmodule.c, 1.129->1.130
Adapted from a patch by Barry Scott, SF patch #102875 and SF bug
#125981: closing sockets was not thread-safe.
----------------------------
Lib/xml/dom/__init__.py, 1.4->1.6

Typo caught by /F -- thanks!
DOMException.__init__():  Remember to pass self to Exception.__init__().
----------------------------
Lib/urllib.py, 1.108->1.109
(partial)
Get rid of string functions, except maketrans() (which is *not*
obsolete!).

Fix a bug in ftpwrapper.retrfile() where somehow ftplib.error_perm was
assumed to be a string.  (The fix applies str().)

Also break some long lines and change the output from test() slightly.
----------------------------
Modules/bsddbmodule.c, 1.25->1.26
[Patch #102827] Fix for PR#119558, avoiding core dumps by checking for
malloc() returning NULL
----------------------------
Lib/site.py, 1.21->1.22
The ".pth" code knew about the layout of Python trees on unix and
windows, but not on the mac. Fixed.
----------------------------
Modules/selectmodule.c, 1.83->1.84
SF bug 110843:  Low FD_SETSIZE limit on Win32 (PR#41).  Boosted to 512.
----------------------------
Modules/parsermodule.c, 2.58->2.59

validate_varargslist():  Fix two bugs in this function, one that affected
                         it when *args and/or **kw are used, and one when
                         they are not.

This closes bug #125375: "parser.tuple2ast() failure on valid parse tree".
----------------------------
Lib/httplib.py, 1.24->1.25
Hopeful fix for SF bug #123924: Windows - using OpenSSL, problem with
socket in httplib.py.

The bug reports that on Windows, you must pass sock._sock to the
socket.ssl() call.  But on Unix, you must pass sock itself.  (sock is
a wrapper on Windows but not on Unix; the ssl() call wants the real
socket object, not the wrapper.)

So we see if sock has an _sock attribute and if so, extract it.

Unfortunately, the submitter of the bug didn't confirm that this patch
works, so I'll just have to believe it (can't test it myself since I
don't have OpenSSL on Windows set up, and that's a nontrivial thing I
believe).
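The check described above amounts to something like this (a sketch; `real_socket` is a hypothetical helper name, and the actual fix inlines the check in httplib):

```python
def real_socket(sock):
    # On Windows the socket module hands out a wrapper object whose
    # real socket lives in the _sock attribute; on Unix, sock already
    # is the real socket.  socket.ssl() wants the real one either way.
    if hasattr(sock, '_sock'):
        sock = sock._sock
    return sock
```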
----------------------------
Python/getargs.c, 2.50->2.51
vgetargskeywords(): Patch for memory leak identified in bug #119862.
----------------------------
Lib/ConfigParser.py, 1.23->1.24

remove_option():  Use the right variable name for the option name!

This closes bug #124324.
----------------------------
Lib/filecmp.py, 1.6->1.7
Call of _cmp had wrong number of parameters.
Fixed definition of _cmp.
----------------------------
Python/compile.c, 2.143->2.144
Plug a memory leak in com_import_stmt(): the tuple created to hold the
"..." in "from M import ..." was never DECREFed.  Leak reported by
James Slaughter and nailed by Barry, who also provided an earlier
version of this patch.
----------------------------
Objects/stringobject.c, 2.92->2.93
SF patch #102548, fix for bug #121013, by mwh at users.sourceforge.net.

Fixes a typo that caused "".join(u"this is a test") to dump core.
----------------------------
Python/marshal.c, 1.57->1.58
Python/compile.c, 2.142->2.143
SF bug 119622:  compile errors due to redundant atof decls.  I don't understand
the bug report (for details, look at it), but agree there's no need for Python
to declare atof itself:  we #include stdlib.h, and ANSI C sez atof is declared
there already.
----------------------------
Lib/webbrowser.py, 1.4->1.5
Typo for Mac code, fixing SF bug 12195.
----------------------------
Objects/fileobject.c, 2.91->2.92
Added _HAVE_BSDI and __APPLE__ to the list of platforms that require a
hack for TELL64()...  Sounds like there's something else going on
really.  Does anybody have a clue I can buy?
----------------------------
Python/thread_cthread.h, 2.13->2.14
Fix syntax error.  Submitted by Bill Bumgarner.  Apparently this is
still in use, for Apple Mac OSX.
----------------------------
Modules/arraymodule.c, 2.58->2.59
Fix for SF bug 117402, crashes on str(array) and repr(array).  This was an
unfortunate consequence of somebody switching from PyArg_Parse to
PyArg_ParseTuple but without changing the argument from a NULL to a tuple.
----------------------------
Lib/smtplib.py, 1.29->1.30
SMTP.connect(): If the socket.connect() raises a socket.error, be sure
to call self.close() to reclaim some file descriptors, then reraise the
exception.  Closes SF patch #102185 and SF bug #119833.
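The pattern is roughly the following (a simplified sketch under assumed names, not the real smtplib code):

```python
import socket

class ConnectSketch:
    # Sketch of the SMTP.connect() fix: if connect() fails, close the
    # socket to reclaim its file descriptor, then reraise the error.
    def __init__(self):
        self.sock = None

    def close(self):
        if self.sock:
            self.sock.close()
            self.sock = None

    def connect(self, host, port):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            self.sock.connect((host, port))
        except socket.error:
            self.close()   # reclaim the descriptor before propagating
            raise
```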
----------------------------
Objects/rangeobject.c, 2.20->2.22

Fixed support for containment test when a negative step is used; this
*really* closes bug #121965.

Added three attributes to the xrange object: start, stop, and step.  These
are the same as for the slice objects.

In the containment test, get the boundary condition right.  ">" was used
where ">=" should have been.

This closes bug #121965.
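The corrected containment test looks roughly like this (a Python sketch of the logic, not the C code in rangeobject.c):

```python
def xrange_contains(start, stop, step, value):
    # For a negative step the valid interval is stop < value <= start;
    # the bug used ">" where ">=" belongs, wrongly rejecting
    # value == start (the first element of the range).
    if step > 0:
        in_bounds = start <= value < stop
    else:
        in_bounds = stop < value <= start
    return in_bounds and (value - start) % step == 0
```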
----------------------------
configure.in, 1.177->1.178
Fix for SF bug #117606:
  - when compiling with GCC on Solaris, use "$(CC) -shared" instead
    of "$(CC) -G" to generate .so files
  - when compiling with GCC on any platform, add "-fPIC" to OPT
    (without this, "$(CC) -shared" dies horribly)
----------------------------
configure.in, 1.175->1.176

Make sure the Modules/ directory is created before writing Modules/Setup.
----------------------------
Modules/_cursesmodule.c, 2.39->2.40
Patch from Randall Hopper to fix PR #116172, "curses module fails to
build on SGI":
* Check for 'sgi' preprocessor symbol, not '__sgi__'
* Surround individual character macros with #ifdef's, instead of making them
  all rely on STRICT_SYSV_CURSES
----------------------------
Modules/_tkinter.c, 1.114->1.115
Do not release unallocated Tcl objects. Closes #117278 and  #117167.
----------------------------
Python/dynload_shlib.c, 2.6->2.7
Patch 102114, Bug 11725.  On OpenBSD (but apparently not on the other
BSDs) you need a leading underscore in the dlsym() lookup name.
----------------------------
Lib/UserString.py, 1.6->1.7
Fix two typos in __imul__.  Closes Bug #117745.
----------------------------
Lib/mailbox.py, 1.25->1.26

Maildir.__init__():  Make sure self.boxes is set.

This closes SourceForge bug #117490.
----------------------------

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From tim.one at home.com  Wed Mar 28 19:51:27 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Mar 2001 12:51:27 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com>

Whew!  What a thankless job, Moshe -- thank you!  Comments on a few:

> Objects/complexobject.c, 2.34->2.35
> SF bug [ #409448 ] Complex division is braindead
> http://sourceforge.net/tracker/?func=detail&aid=409448&group_id=547
> 0&atid=105470

As we've seen, that caused a std test to fail on Mac Classic, due to an
accident of fused f.p. code generation and what sure looks like a PowerPC HW
bug.  It can also change numeric results slightly due to different order of
f.p. operations on any platform.  So this would not be a "pure bugfix" in
Aahz's view, despite that it's there purely to fix bugs <wink>.

> Modules/selectmodule.c, 1.83->1.84
> SF bug 110843:  Low FD_SETSIZE limit on Win32 (PR#41).  Boosted to 512.

I'm afraid that boosting implementation limits has to be considered "a
feature".

> Objects/rangeobject.c, 2.20->2.22
>
> Fixed support for containment test when a negative step is used; this
> *really* closes bug #121965.
>
> Added three attributes to the xrange object: start, stop, and step.
> These are the same as for the slice objects.
>
> In the containment test, get the boundary condition right.  ">" was used
> where ">=" should have been.
>
> This closes bug #121965.

This one Aahz singled out previously as a canonical example of a patch he
would *not* include, because adding new attributes seemed potentially
disruptive to him (but why?  maybe someone was depending on the precise value
of len(dir(xrange(42)))?).




From aahz at panix.com  Wed Mar 28 19:57:49 2001
From: aahz at panix.com (aahz at panix.com)
Date: Wed, 28 Mar 2001 09:57:49 -0800 (PST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com> from "Tim Peters" at Mar 28, 2001 12:51:27 PM
Message-ID: <200103281757.MAA04464@panix3.panix.com>

Tim:
> Moshe:
>>
>> Fixed support for containment test when a negative step is used; this
>> *really* closes bug #121965.
>>
>> Added three attributes to the xrange object: start, stop, and step.
>> These are the same as for the slice objects.
>>
>> In the containment test, get the boundary condition right.  ">" was used
>> where ">=" should have been.
>>
>> This closes bug #121965.
> 
> This one Aahz singled out previously as a canonical example of a
> patch he would *not* include, because adding new attributes seemed
> potentially disruptive to him (but why? maybe someone was depending on
> the precise value of len(dir(xrange(42)))?).

I'm not sure about this, but it seems to me that the attribute change
will generate a different .pyc.  If I'm wrong about that, this patch
as-is is fine with me; otherwise, I'd lobby to use the containment fix
but not the attributes (assuming we're willing to use part of a patch).


From mwh21 at cam.ac.uk  Wed Mar 28 20:18:28 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 28 Mar 2001 19:18:28 +0100
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Moshe Zadka's message of "Wed, 28 Mar 2001 19:02:01 +0200"
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <m3itktdaff.fsf@atrus.jesus.cam.ac.uk>

Moshe Zadka <moshez at zadka.site.co.il> writes:

> After labouring over the list of log messages for 2-3 days, I finally
> have a tentative list of changes. I present it as a list of checkin
> messages, complete with the versions. Sometimes I concatenated several
> consecutive checkins into one -- "I fixed the bug", "oops, typo last
> fix" and similar.
> 
> Please go over the list and see if there's anything you feel should
> not go.

I think there are some that don't apply to 2.0.1:

> Python/pythonrun.c, 2.128->2.129
> Fix memory leak with SyntaxError.  (The DECREF was originally hidden
> inside a piece of code that was deemed redundant; the DECREF was
> unfortunately *not* redundant!)

and

> Python/compile.c, 2.150->2.151
> Fix bug reported by Ka-Ping Yee: The compiler botched parsing function
> parameters that contained both anonymous tuples and *arg or **arg. Ex:
> def f(a, (b, c), *d): pass
> 
> Fix the symtable_params() to generate names in the right order for
> co_varnames slot of code object.  Consider *arg and **arg before the
> "complex" names introduced by anonymous tuples.

aren't meaningful without the nested scopes stuff.  But I guess you'll
notice pretty quickly if I'm right...

Otherwise, general encouragement!  Please keep it up.

Cheers,
M.

-- 
  languages shape the way we think, or don't.
                                        -- Erik Naggum, comp.lang.lisp




From jeremy at alum.mit.edu  Wed Mar 28 19:07:10 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 28 Mar 2001 12:07:10 -0500 (EST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <15042.6718.542630.936641@w221.z064000254.bwi-md.dsl.cnc.net>

> Python/ceval.c, 2.224->2.225
> SF bug #130532:  newest CVS won't build on AIX.
> Removed illegal redefinition of REPR macro; kept the one with the
> argument name that isn't too easy to confuse with zero <wink>.

The REPR macro was not present in 2.0 and is no longer present in 2.1.

Jeremy



From guido at digicool.com  Wed Mar 28 20:21:18 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 13:21:18 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 09:57:49 PST."
             <200103281757.MAA04464@panix3.panix.com> 
References: <200103281757.MAA04464@panix3.panix.com> 
Message-ID: <200103281821.NAA10019@cj20424-a.reston1.va.home.com>

> > This one Aahz singled out previously as a canonical example of a
> > patch he would *not* include, because adding new attributes seemed
> > potentially disruptive to him (but why? maybe someone was depending on
> > the precise value of len(dir(xrange(42)))?).
> 
> I'm not sure about this, but it seems to me that the attribute change
> will generate a different .pyc.  If I'm wrong about that, this patch
> as-is is fine with me; otherwise, I'd lobby to use the containment fix
> but not the attributes (assuming we're willing to use part of a patch).

Adding attributes to xrange() can't possibly change the .pyc files.

> From my POV, it's *real* important that .pyc files be portable between
> bugfix releases, and so far I haven't seen any argument against that
> goal.

Agreed with the goal, of course.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Wed Mar 28 19:07:03 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 28 Mar 2001 12:07:03 -0500 (EST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <15042.6711.20698.535298@w221.z064000254.bwi-md.dsl.cnc.net>

> Python/compile.c, 2.150->2.151
> Fix bug reported by Ka-Ping Yee: The compiler botched parsing function
> parameters that contained both anonymous tuples and *arg or **arg. Ex:
> def f(a, (b, c), *d): pass
>
> Fix the symtable_params() to generate names in the right order for
> co_varnames slot of code object.  Consider *arg and **arg before the
> "complex" names introduced by anonymous tuples.

I believe this bug report was only relevant for the compiler w/
symbol table pass introduced in Python 2.1.

Jeremy



From jeremy at alum.mit.edu  Wed Mar 28 19:07:22 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 28 Mar 2001 12:07:22 -0500 (EST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <15042.6730.575137.298460@w221.z064000254.bwi-md.dsl.cnc.net>

> Python/ceval.c, 2.215->2.216
> Add missing Py_DECREF in fast_cfunction.  Partial fix for SF bug
> #127699.

fast_cfunction was not present in Python 2.0.  The CALL_FUNCTION
implementation in ceval.c was rewritten for Python 2.1.

Jeremy




From moshez at zadka.site.co.il  Wed Mar 28 20:22:27 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 20:22:27 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com>
Message-ID: <E14iKaR-0000d5-00@darjeeling>

On Wed, 28 Mar 2001 12:51:27 -0500, "Tim Peters" <tim.one at home.com> wrote:

> Whew!  What a thankless job, Moshe -- thank you!

I just wanted to keep this in to illustrate the ironic nature of the
universe ;-)

>  Comments on a few:
> 
> > Objects/complexobject.c, 2.34->2.35
> > SF bug [ #409448 ] Complex division is braindead
> > http://sourceforge.net/tracker/?func=detail&aid=409448&group_id=547
> > 0&atid=105470
> 
> As we've seen, that caused a std test to fail on Mac Classic

OK, it's dead.

> > Modules/selectmodule.c, 1.83->1.84
> > SF bug 110843:  Low FD_SETSIZE limit on Win32 (PR#41).  Boosted to 512.
> 
> I'm afraid that boosting implementation limits has to be considered "a
> feature".

You're right. Killed.

> > Objects/rangeobject.c, 2.20->2.22
> >
> > Fixed support for containment test when a negative step is used; this
> > *really* closes bug #121965.
> >
> > Added three attributes to the xrange object: start, stop, and step.
> > These are the same as for the slice objects.
> >
> > In the containment test, get the boundary condition right.  ">" was used
> > where ">=" should have been.
> >
> > This closes bug #121965.
> 
> This one Aahz singled out previously as a canonical example of a patch he
> would *not* include, because adding new attributes seemed potentially
> disruptive to him (but why?  maybe someone was depending on the precise value
> of len(dir(xrange(42)))?).

You're right, I forgot to (partial) this.
(partial)'s mean, BTW, that only part of the patch goes.
I do want to fix the containment, and it's in the same version upgrade.
More work for me! Yay!

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Wed Mar 28 20:25:21 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 20:25:21 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <15042.6730.575137.298460@w221.z064000254.bwi-md.dsl.cnc.net>
References: <15042.6730.575137.298460@w221.z064000254.bwi-md.dsl.cnc.net>, <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <E14iKdF-0000eg-00@darjeeling>

On Wed, 28 Mar 2001, Jeremy Hylton <jeremy at alum.mit.edu> wrote:

> > Python/ceval.c, 2.215->2.216
> > Add missing Py_DECREF in fast_cfunction.  Partial fix for SF bug
> > #127699.
> 
> fast_cfunction was not present in Python 2.0.  The CALL_FUNCTION
> implementation in ceval.c was rewritten for Python 2.1.

Thanks, dropped. Ditto for the REPR and the *arg parsing.

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Wed Mar 28 20:30:31 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 20:30:31 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <200103281757.MAA04464@panix3.panix.com>
References: <200103281757.MAA04464@panix3.panix.com>
Message-ID: <E14iKiF-0000fW-00@darjeeling>

On Wed, 28 Mar 2001 09:57:49 -0800 (PST), <aahz at panix.com> wrote:
 
> From my POV, it's *real* important that .pyc files be portable between
> bugfix releases, and so far I haven't seen any argument against that
> goal.

It is a release-critical goal, yes.
It's not an argument against adding attributes to range objects.
However, adding attributes to range objects is a no-go, and it got in by
mistake.

The list should be, of course, treated as a first rough draft. I'll post a 
more complete list to p-d and p-l after it's hammered out a bit. Since
everyone who checked stuff in is on this mailing list, I wanted people
to review their own checkins first, to see I'm not making complete blunders.

Thanks a lot to Tim, Jeremy and /F for their feedback, by the way.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From aahz at panix.com  Wed Mar 28 21:06:15 2001
From: aahz at panix.com (aahz at panix.com)
Date: Wed, 28 Mar 2001 11:06:15 -0800 (PST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <no.id> from "Guido van Rossum" at Mar 28, 2001 01:21:18 PM
Message-ID: <200103281906.OAA10976@panix6.panix.com>

Guido:
>Aahz:
>>
>> I'm not sure about this, but it seems to me that the attribute change
>> will generate a different .pyc.  If I'm wrong about that, this patch
>> as-is is fine with me; otherwise, I'd lobby to use the containment fix
>> but not the attributes (assuming we're willing to use part of a patch).
> 
> Adding attributes to xrange() can't possibly change the .pyc files.

Okay, chalk another one up to ignorance.  Another thought occurred to me
in the shower, though: would this change the pickle of xrange()?  If yes,
should pickle changes also be prohibited in bugfix releases (in the PEP)?
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"Boost the stock market -- fire someone"



From guido at digicool.com  Wed Mar 28 21:12:59 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 14:12:59 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 19:02:01 +0200."
             <E14iJKb-0000Kf-00@darjeeling> 
References: <E14iJKb-0000Kf-00@darjeeling> 
Message-ID: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>

> After labouring over the list of log messages for 2-3 days, I finally
> have a tentative list of changes. I present it as a list of checkin
> messages, complete with the versions. Sometimes I concatenated several
> consecutive checkins into one -- "I fixed the bug", "oops, typo last
> fix" and similar.

Good job, Moshe!  The few where I had doubts have already been covered
by others.  As the saying goes, "check it in" :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fredrik at effbot.org  Wed Mar 28 21:21:46 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Wed, 28 Mar 2001 21:21:46 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
References: <200103281906.OAA10976@panix6.panix.com>
Message-ID: <018601c0b7bc$55d08f00$e46940d5@hagrid>

> Okay, chalk another one up to ignorance.  Another thought occurred to me
> in the shower, though: would this change the pickle of xrange()?  If yes,
> should pickle changes also be prohibited in bugfix releases (in the PEP)?

from the why-dont-you-just-try-it department:

Python 2.0 (#8, Jan 29 2001, 22:28:01) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import pickle
>>> data = xrange(10)
>>> dir(data)
['tolist']
>>> pickle.dumps(data)
Traceback (most recent call last):
...
pickle.PicklingError: can't pickle 'xrange' object: xrange(10)

Python 2.1b2 (#12, Mar 22 2001, 15:15:01) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import pickle
>>> data = xrange(10)
>>> dir(data)
['start', 'step', 'stop', 'tolist']
>>> pickle.dumps(data)
Traceback (most recent call last):
...
pickle.PicklingError: can't pickle 'xrange' object: xrange(10)

Cheers /F




From aahz at panix.com  Wed Mar 28 21:17:59 2001
From: aahz at panix.com (aahz at panix.com)
Date: Wed, 28 Mar 2001 11:17:59 -0800 (PST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <no.id> from "Fredrik Lundh" at Mar 28, 2001 09:21:46 PM
Message-ID: <200103281917.OAA12358@panix6.panix.com>

> > Okay, chalk another one up to ignorance.  Another thought occurred to me
> > in the shower, though: would this change the pickle of xrange()?  If yes,
> > should pickle changes also be prohibited in bugfix releases (in the PEP)?
> 
> from the why-dont-you-just-try-it department:

You're right, I should have tried it.  I didn't because my shell account
still hasn't set up Python 2.0 as the default version and I haven't yet
set myself up to test beta/patch/CVS releases.  <sigh>  The more I
learn, the more ignorant I feel....
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"Boost the stock market -- fire someone"



From guido at digicool.com  Wed Mar 28 21:18:26 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 14:18:26 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 11:06:15 PST."
             <200103281906.OAA10976@panix6.panix.com> 
References: <200103281906.OAA10976@panix6.panix.com> 
Message-ID: <200103281918.OAA10296@cj20424-a.reston1.va.home.com>

> > Adding attributes to xrange() can't possibly change the .pyc files.
> 
> Okay, chalk another one up to ignorance.  Another thought occurred to me
> in the shower, though: would this change the pickle of xrange()?  If yes,
> should pickle changes also be prohibited in bugfix releases (in the PEP)?

I agree that pickle changes should be prohibited, although I want to
make an exception for the fix to pickling of Unicode objects (which is
pretty broken in 2.0).

That said, xrange() objects can't be pickled, so it's a non-issue. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jack at oratrix.nl  Wed Mar 28 21:59:26 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 28 Mar 2001 21:59:26 +0200 (MET DST)
Subject: [Python-Dev] MacPython 2.1b2 available
Message-ID: <20010328195926.47261EA11F@oratrix.oratrix.nl>

MacPython 2.1b2 is available for download. Get it via
http://www.cwi.nl/~jack/macpython.html .

New in this version:
- A choice of Carbon or Classic runtime, so runs on anything between
  MacOS 8.1 and MacOS X
- Distutils support for easy installation of extension packages
- BBedit language plugin
- All the platform-independent Python 2.1 mods
- New version of Numeric
- Lots of bug fixes
- Choice of normal and active installer

Please send feedback on this release to pythonmac-sig at python.org,
where all the maccies hang out.

Enjoy,


--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From moshez at zadka.site.co.il  Wed Mar 28 21:58:23 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 21:58:23 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <m3itktdaff.fsf@atrus.jesus.cam.ac.uk>
References: <m3itktdaff.fsf@atrus.jesus.cam.ac.uk>, <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <E14iM5H-0000rB-00@darjeeling>

On 28 Mar 2001 19:18:28 +0100, Michael Hudson <mwh21 at cam.ac.uk> wrote:
 
> I think there are some that don't apply to 2.0.1:
> 
> > Python/pythonrun.c, 2.128->2.129
> > Fix memory leak with SyntaxError.  (The DECREF was originally hidden
> > inside a piece of code that was deemed redundant; the DECREF was
> > unfortunately *not* redundant!)

OK, dead.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Wed Mar 28 22:05:38 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 22:05:38 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>
References: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>, <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <E14iMCI-0000s2-00@darjeeling>

On Wed, 28 Mar 2001 14:12:59 -0500, Guido van Rossum <guido at digicool.com> wrote:
 
> The few where I had doubts have already been covered
> by others.  As the saying goes, "check it in" :-)

I'm afraid it will still take time to generate the patches, apply
them, test them, etc....
I was hoping to create a list of patches tonight, but I'm a bit too
dead. I'll post to p-l tomorrow with the new list of patches.

PS.
Tools/script/logmerge.py loses version numbers. That pretty much
sucks for doing the work I did, even though the raw log was worse --
I ended up cross referencing and finding version numbers by hand.
If anyone doesn't have anything better to do, here's a nice gift
for 2.1 ;-)

PPS.
Most of the work I can do myself just fine. There are a couple of places
where I could *really* need some help. One of those is testing fixes
for bugs which manifest on exotic OSes (and as far as I'm concerned, 
Windows is as exotic as they come <95 wink>.) Please let me know if
you're interested in testing patches for them.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Wed Mar 28 22:19:19 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 15:19:19 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 22:05:38 +0200."
             <E14iMCI-0000s2-00@darjeeling> 
References: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>, <E14iJKb-0000Kf-00@darjeeling>  
            <E14iMCI-0000s2-00@darjeeling> 
Message-ID: <200103282019.PAA10717@cj20424-a.reston1.va.home.com>

> > The few where I had doubts have already been covered
> > by others.  As the saying goes, "check it in" :-)
> 
> I'm afraid it will still take time to generate the patches, apply
> them, test them, etc....

Understood!  There's no immediate hurry (except for the fear that you
might be distracted by real work :-).

> I was hoping to create a list of patches tonight, but I'm a bit too
> dead. I'll post to p-l tomorrow with the new list of patches.

You're doing great.  Take some rest.

> PS.
> Tools/script/logmerge.py loses version numbers. That pretty much
> sucks for doing the work I did, even though the raw log was worse --
> I ended up cross referencing and finding version numbers by hand.
> If anyone doesn't have anything better to do, here's a nice gift
> for 2.1 ;-)

Yes, it sucks.  Feel free to check in a change into the 2.1 tree!

> PPS.
> Most of the work I can do myself just fine. There are a couple of places
> where I could *really* need some help. One of those is testing fixes
> for bugs which manifest on exotic OSes (and as far as I'm concerned, 
> Windows is as exotic as they come <95 wink>.) Please let me know if
> you're interested in testing patches for them.

PL will volunteer Win98se and Win2000 testing.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Wed Mar 28 22:25:19 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 28 Mar 2001 22:25:19 +0200
Subject: [Python-Dev] List of Patches to Go in 2.0.1
Message-ID: <200103282025.f2SKPJj04355@mira.informatik.hu-berlin.de>

> This one Aahz singled out previously as a canonical example of a patch he
> would *not* include, because adding new attributes seemed potentially
> disruptive to him (but why?  maybe someone was depending on the precise value
> of len(dir(xrange(42)))?).

There is a patch on SF which backports that change without introducing
these attributes in the 2.0.1 class.

Regards,
Martin




From martin at loewis.home.cs.tu-berlin.de  Wed Mar 28 22:39:20 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 28 Mar 2001 22:39:20 +0200
Subject: [Python-Dev] List of Patches to Go in 2.0.1
Message-ID: <200103282039.f2SKdKB04694@mira.informatik.hu-berlin.de>

> Modules/_tkinter.c, 1.114->1.115
> Do not release unallocated Tcl objects. Closes #117278 and  #117167.

That is already committed to the maintenance branch.

> Modules/pyexpat.c, 2.42->2.43

There are a number of memory leaks which I think should get fixed,
inside the changes:

2.33->2.34
2.31->2.32 (garbage collection, and missing free calls)

I can produce a patch that only has those changes.

Martin



From michel at digicool.com  Wed Mar 28 23:00:57 2001
From: michel at digicool.com (Michel Pelletier)
Date: Wed, 28 Mar 2001 13:00:57 -0800 (PST)
Subject: [Python-Dev] Updated, shorter PEP 245
Message-ID: <Pine.LNX.4.32.0103281241530.3864-100000@localhost.localdomain>

Hi folks,

I have broken PEP 245 into two different PEPs, the first, which is now PEP
245, covers only the syntax and the changes to the Python language.  It is
much shorter and sweeter than the old one.

The second one, yet to have a number or to be totally polished off,
describes my proposed interface *model* based on the Zope interfaces work
and the previous incarnation of PEP 245.  This next PEP is totally
independent of PEP 245, and can be accepted or rejected independent of the
syntax if a different model is desired.

In fact, Amos Latteier has proposed to me a different, simpler, though
less functional model that would make an excellent alternative.  I'll
encourage him to formalize it.  Or would it be acceptable to offer two
possible models in the same PEP?

Finally, I foresee a third PEP to cover issues beyond the model, like type
checking, interface enforcement, and formalizing well-known python
"protocols" as interfaces.  That's work for later consideration, which is
also independent of the previous two PEPs.

The *new* PEP 245 can be found at the following link:

http://www.zope.org/Members/michel/MyWiki/InterfacesPEP/PEP245.txt

Enjoy, and please feel free to comment.

-Michel





From michel at digicool.com  Wed Mar 28 23:12:09 2001
From: michel at digicool.com (Michel Pelletier)
Date: Wed, 28 Mar 2001 13:12:09 -0800 (PST)
Subject: [Python-Dev] Updated, shorter PEP 245
In-Reply-To: <Pine.LNX.4.32.0103281241530.3864-100000@localhost.localdomain>
Message-ID: <Pine.LNX.4.32.0103281311420.3864-100000@localhost.localdomain>


On Wed, 28 Mar 2001, Michel Pelletier wrote:

> The *new* PEP 245 can be found at the following link:
>
> http://www.zope.org/Members/michel/MyWiki/InterfacesPEP/PEP245.txt

It's also available in a formatted version at the python dev site:

http://python.sourceforge.net/peps/pep-0245.html

-Michel




From moshez at zadka.site.co.il  Wed Mar 28 23:10:14 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 23:10:14 +0200
Subject: [Python-Dev] Re: List of Patches to Go in 2.0.1
In-Reply-To: <200103282039.f2SKdKB04694@mira.informatik.hu-berlin.de>
References: <200103282039.f2SKdKB04694@mira.informatik.hu-berlin.de>
Message-ID: <E14iNCo-00014t-00@darjeeling>

On Wed, 28 Mar 2001, "Martin v. Loewis" <martin at loewis.home.cs.tu-berlin.de> wrote:

> > Modules/_tkinter.c, 1.114->1.115
> > Do not release unallocated Tcl objects. Closes #117278 and  #117167.
> 
> That is already committed to the maintenance branch.

Thanks, deleted.

> > Modules/pyexpat.c, 2.42->2.43
> 
> There are a number of memory leaks which I think should get fixed,
> inside the changes:
> 
> 2.33->2.34
> 2.31->2.32 (garbage collection, and missing free calls)
> 
> I can produce a patch that only has those changes.

Yes, that would be very helpful. 
Please assign it to me if you post it at SF.
The problem I had with the XML code (which had a couple of other fixed
bugs) was that it was always "resynced with PyXML tree", which seemed
to me too large to be safe...
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From barry at digicool.com  Wed Mar 28 23:14:42 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Wed, 28 Mar 2001 16:14:42 -0500
Subject: [Python-Dev] Updated, shorter PEP 245
References: <Pine.LNX.4.32.0103281241530.3864-100000@localhost.localdomain>
Message-ID: <15042.21570.617105.910629@anthem.wooz.org>

>>>>> "MP" == Michel Pelletier <michel at digicool.com> writes:

    MP> In fact, Amos Latteier has proposed to me a different,
    MP> simpler, though less functional model that would make an
    MP> excellent alternative.  I'll encourage him to formalize it.
    MP> Or would it be acceptable to offer two possible models in the
    MP> same PEP?

It would probably be better to have them as two separate (competing)
PEPs.

-Barry



From mwh21 at cam.ac.uk  Thu Mar 29 00:55:36 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 28 Mar 2001 23:55:36 +0100
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: "Tim Peters"'s message of "Wed, 21 Mar 2001 17:30:52 -0500"
References: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com>
Message-ID: <m3g0fxcxlj.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one at home.com> writes:

> I'm calling this one a bug in doctest.py, and will fix it there.  Ugly:
> since we can no longer rely on list.sort() not raising exceptions, it won't be
> enough to replace the existing
> 
>     for k, v in dict.items():
> 
> with
> 
>     items = dict.items()
>     items.sort()
>     for k, v in items:

Hmm, reading through these posts for summary purposes, it occurs to me
that this *is* safe, 'cause item 0 of the tuples will always be
distinct strings, and as equal-length tuples are compared
lexicographically, the values will never actually be compared!

pointless-ly y'rs
M.

-- 
93. When someone says "I want a programming language in which I
    need only say what I wish done," give him a lollipop.
  -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html




From mwh21 at cam.ac.uk  Thu Mar 29 14:06:00 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: Thu, 29 Mar 2001 13:06:00 +0100 (BST)
Subject: [Python-Dev] python-dev summary, 2001-03-15 - 2001-03-29
Message-ID: <Pine.LNX.4.10.10103291304110.866-100000@localhost.localdomain>

 This is a summary of traffic on the python-dev mailing list between
 Mar 15 and Mar 28 (inclusive) 2001.  It is intended to inform the
 wider Python community of ongoing developments.  To comment, just
 post to python-list at python.org or comp.lang.python in the usual
 way. Give your posting a meaningful subject line, and if it's about a
 PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep
 iteration). All python-dev members are interested in seeing ideas
 discussed by the community, so don't hesitate to take a stance on a
 PEP if you have an opinion.

 This is the fourth summary written by Michael Hudson.
 Summaries are archived at:

  <http://starship.python.net/crew/mwh/summaries/>

   Posting distribution (with apologies to mbm)

   Number of articles in summary: 410

    50 |                 [|]                                    
       |                 [|]                                    
       |                 [|]                                    
       |                 [|]                                    
    40 |                 [|]                                    
       |                 [|] [|]                                
       | [|]             [|] [|]                                
       | [|]             [|] [|] [|]     [|]                    
    30 | [|]             [|] [|] [|]     [|]                    
       | [|]             [|] [|] [|]     [|]                    
       | [|]             [|] [|] [|]     [|] [|]                
       | [|]         [|] [|] [|] [|]     [|] [|]             [|]
    20 | [|] [|]     [|] [|] [|] [|]     [|] [|]             [|]
       | [|] [|]     [|] [|] [|] [|] [|] [|] [|]             [|]
       | [|] [|]     [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
       | [|] [|]     [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
    10 | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
       | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
       | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]
       | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]
     0 +-044-024-013-029-059-046-040-022-040-031-007-019-008-028
        Thu 15| Sat 17| Mon 19| Wed 21| Fri 23| Sun 25| Tue 27|
            Fri 16  Sun 18  Tue 20  Thu 22  Sat 24  Mon 26  Wed 28

 Bug-fixing for 2.1 remained a priority for python-dev this fortnight
 which saw the release of 2.1b2 last Friday.


    * Python 2.0.1 *

 Aahz posted his first draft of PEP 6, outlining the process by which
 maintenance releases of Python should be made.

  <http://python.sourceforge.net/peps/pep-0006.html>

 Moshe Zadka has volunteered to be the "Patch Czar" for Python 2.0.1.

  <http://mail.python.org/pipermail/python-dev/2001-March/013952.html>

 I'm sure we can all join in the thanks due to Moshe for taking up
 this tedious but valuable job!


    * Simple Generator implementations *

 Neil Schemenauer posted links to a couple of "simple" implementations
 of generators (a.k.a. resumable functions) that do not depend on the
 stackless changes going in.

  <http://mail.python.org/pipermail/python-dev/2001-March/013648.html>
  <http://mail.python.org/pipermail/python-dev/2001-March/013666.html>

 These implementations have the advantage that they might be
 applicable to Jython, something that sadly cannot be said of
 stackless.
 

    * portable file-system stuff *

 The longest thread of the summary period started off with a request
 for a portable way to find out free disk space:

  <http://mail.python.org/pipermail/python-dev/2001-March/013706.html>

 After a slightly acrimonious debate about the nature of Python
 development, /F produced a patch that implements partial support for
 os.statvfs on Windows:

  <http://sourceforge.net/tracker/index.php?func=detail&aid=410547&group_id=5470&atid=305470>

 which can be used to extract such information.
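For reference, on POSIX systems this information has long been available through `os.statvfs()`; a minimal modern sketch (not the 2001 patch under discussion, and POSIX-only — the path `"/"` is just an example mount point):

```python
import os

# Free space on a POSIX filesystem: the fragment size times the number
# of blocks available to an unprivileged process.
st = os.statvfs("/")
free_bytes = st.f_frsize * st.f_bavail
print("free bytes on /:", free_bytes)
```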

 A side-product of this discussion was the observation that although
 Python has a module that does some file manipulation, shutil, it is
 far from being as portable as it might be - in particular it fails
 miserably on the Mac where it ignores resource forks.  Greg Ward then
 pointed out that he had to implement cross-platform file copying for
 the distutils

  <http://mail.python.org/pipermail/python-dev/2001-March/013962.html>

 so perhaps all that needs to be done is for this stuff to be moved
 into the core.  It seems very unlikely there will be much movement
 here before 2.2.




From fdrake at cj42289-a.reston1.va.home.com  Thu Mar 29 15:01:26 2001
From: fdrake at cj42289-a.reston1.va.home.com (Fred Drake)
Date: Thu, 29 Mar 2001 08:01:26 -0500 (EST)
Subject: [Python-Dev] [development doc updates]
Message-ID: <20010329130126.C3EED2888E@cj42289-a.reston1.va.home.com>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/


For Peter Funk:  Removed space between function/method/class names and
their parameter lists for easier cut & paste.  This is a *tentative*
change; feedback is appreciated at python-docs at python.org.

Also added some new information on integrating with the cycle detector
and some additional C APIs introduced in Python 2.1 (PyObject_IsInstance(),
PyObject_IsSubclass()).




From dalke at acm.org  Fri Mar 30 01:07:17 2001
From: dalke at acm.org (Andrew Dalke)
Date: Thu, 29 Mar 2001 16:07:17 -0700
Subject: [Python-Dev] 'mapping' in weakrefs unneeded?
Message-ID: <015101c0b8a5$00c37ce0$d795fc9e@josiah>

Hello all,

  I'm starting to learn how to use weakrefs.  I'm curious
about the function named 'mapping'.  It is implemented as:

> def mapping(dict=None,weakkeys=0):
>     if weakkeys:
>         return WeakKeyDictionary(dict)
>     else:
>         return WeakValueDictionary(dict)

Why is this a useful function?  Shouldn't people just call
WeakKeyDictionary and WeakValueDictionary directly instead
of calling mapping with a parameter to specify which class
to construct?

If anything, this function is very confusing.  Take the
associated documentation as a case in point:

> mapping([dict[, weakkeys=0]]) 
> Return a weak dictionary. If dict is given and not None,
> the new dictionary will contain the items contained in dict.
> The values from dict must be weakly referencable; if any
> values which would be inserted into the new mapping are not
> weakly referencable, TypeError will be raised and the new
> mapping will be empty.
>
> If the weakkeys argument is not given or zero, the values in
> the dictionary are weak. That means the entries in the
> dictionary will be discarded when no strong reference to the
> value exists anymore. 
>
> If the weakkeys argument is nonzero, the keys in the
> dictionary are weak, i.e. the entry in the dictionary is
> discarded when the last strong reference to the key is
> discarded. 

As far as I can tell, this documentation is wrong, or at
the very least confusing.  For example, it says:
> The values from dict must be weakly referencable

but when the weakkeys argument is nonzero,
> the keys in the dictionary are weak

So must both keys and values be weak?  Or only the keys?
I hope the latter since there are cases I can think of
where I want the keys to be weak and the values be types,
hence non-weakreferencable.

Wouldn't it be better to remove the 'mapping' function and
only have the WeakKeyDictionary and WeakValueDictionary?
In which case the documentation becomes:

> WeakValueDictionary([dict]) 
> Return a weak dictionary. If dict is given and not None,
> the new dictionary will contain the items contained in dict.
> The values from dict must be weakly referencable; if any
> values which would be inserted into the new mapping are not
> weakly referencable, TypeError will be raised and the new
> mapping will be empty.
>
> The values in
> the dictionary are weak. That means the entries in the
> dictionary will be discarded when no strong reference to the
> value exists anymore. 

> WeakKeyDictionary([dict]) 
> Return a weak dictionary. If dict is given and not None,
> the new dictionary will contain the items contained in dict.
> The keys from dict must be weakly referencable; if any
> keys which would be inserted into the new mapping are not
> weakly referencable, TypeError will be raised and the new
> mapping will be empty.
>
> The keys in
> the dictionary are weak. That means the entries in the
> dictionary will be discarded when no strong reference to the
> key exists anymore. 

Easier to read and to see the parallels between the two
styles, IMHO of course.

I am not on this list though I will try to read the
archives online for the next couple of days.  Please
CC me about any resolution to this topic.

Sincerely,

                    Andrew
                    dalke at acm.org





From martin at loewis.home.cs.tu-berlin.de  Fri Mar 30 09:55:59 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 30 Mar 2001 09:55:59 +0200
Subject: [Python-Dev] Assigning to __debug__
Message-ID: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de>

After the recent change that assignments to __debug__ are disallowed,
I noticed that IDLE stops working (see SF bug report), since it was
assigning to __debug__. 

Simply commenting-out the assignment (to zero) did no good: Inside the
__debug__ blocks, IDLE would try to perform print statements, which
would write to the re-assigned sys.stdout, which would invoke the code
that had the __debug__, which would give up thanks to infinite
recursion. So essentially, you either have to remove the __debug__
blocks, or rewrite them to write to save_stdout - in which case all
the ColorDelegator debug messages appear in the terminal window.

So anybody porting to Python 2.1 will essentially have to remove all
__debug__ blocks that were previously disabled by assigning 0 to
__debug__. I think this is undesirable.

As I recall, in the original description of __debug__, being able to
assign to it was reported as one of its main features, so that you
still had a run-time option (unless the interpreter was running with
-O, which eliminates the __debug__ blocks).

So in short, I think this change should be reverted.

Regards,
Martin

P.S. What was the motivation for that change, anyway?



From mal at lemburg.com  Fri Mar 30 10:06:42 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 10:06:42 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de>
Message-ID: <3AC43E92.C269D98D@lemburg.com>

"Martin v. Loewis" wrote:
> 
> After the recent change that assignments to __debug__ are disallowed,
> I noticed that IDLE stops working (see SF bug report), since it was
> assigning to __debug__.
> 
> Simply commenting-out the assignment (to zero) did no good: Inside the
> __debug__ blocks, IDLE would try to perform print statements, which
> would write to the re-assigned sys.stdout, which would invoke the code
> that had the __debug__, which would give up thanks to infinite
> recursion. So essentially, you either have to remove the __debug__
> blocks, or rewrite them to write to save_stdout - in which case all
> the ColorDelegator debug messages appear in the terminal window.
> 
> So anybody porting to Python 2.1 will essentially have to remove all
> __debug__ blocks that were previously disabled by assigning 0 to
> __debug__. I think this is undesirable.
> 
> As I recall, in the original description of __debug__, being able to
> assign to it was reported as one of its main features, so that you
> still had a run-time option (unless the interpreter was running with
> -O, which eliminates the __debug__ blocks).
> 
> So in short, I think this change should be reverted.

+1 from here... 

I use the same concept for debugging: during development I set 
__debug__ to 1, in production I change it to 0 (python -O does this
for me as well).

> Regards,
> Martin
> 
> P.S. What was the motivation for that change, anyway?
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at digicool.com  Fri Mar 30 15:30:18 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 08:30:18 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 09:55:59 +0200."
             <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> 
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> 
Message-ID: <200103301330.IAA23144@cj20424-a.reston1.va.home.com>

> After the recent change that assignments to __debug__ are disallowed,
> I noticed that IDLE stops working (see SF bug report), since it was
> assigning to __debug__. 

I checked in a fix to IDLE too, but it seems you were using an
externally-installed version of IDLE.

> Simply commenting-out the assignment (to zero) did no good: Inside the
> __debug__ blocks, IDLE would try to perform print statements, which
> would write to the re-assigned sys.stdout, which would invoke the code
> that had the __debug__, which would give up thanks to infinite
> recursion. So essentially, you either have to remove the __debug__
> blocks, or rewrite them to writing to save_stdout - in which case all
> the ColorDelegator debug message appear in the terminal window.

IDLE was totally abusing the __debug__ variable -- in the fix, I
simply changed all occurrences of __debug__ to DEBUG.

> So anybody porting to Python 2.1 will essentially have to remove all
> __debug__ blocks that were previously disabled by assigning 0 to
> __debug__. I think this is undesirable.

Assigning to __debug__ was never well-defined.  You used it at your
own risk.

> As I recall, in the original description of __debug__, being able to
> assign to it was reported as one of its main features, so that you
> still had a run-time option (unless the interpreter was running with
> -O, which eliminates the __debug__ blocks).

The manual has always used words that suggest that there is something
special about __debug__.  And there was: the compiler assumed it could
eliminate blocks started with "if __debug__:" when compiling in -O
mode.  Also, assert statements have always used LOAD_GLOBAL to
retrieve the __debug__ variable.

> So in short, I think this change should be reverted.

It's possible that it breaks more code, and it's possible that we end
up having to change the error into a warning for now.  But I insist
that assignment to __debug__ should become illegal.  You can *use* the
variable (to determine whether -O is on or not), but you can't *set*
it.
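The intended read-only usage looks like this in any modern Python (a minimal sketch; under `python -O` both the guarded block and the assert are compiled away, and `__debug__` reads as False):

```python
def mean(values):
    if __debug__:
        # Extra validation, stripped entirely when compiled with -O.
        assert values, "empty input"
        assert all(isinstance(v, (int, float)) for v in values)
    return sum(values) / len(values)

print(mean([1, 2, 3]))  # → 2.0
print(__debug__)        # True without -O, False with -O
```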

> Regards,
> Martin
> 
> P.S. What was the motivation for that change, anyway?

To enforce a restriction that was always intended: __debug__ should be
a read-only variable.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mal at lemburg.com  Fri Mar 30 15:42:59 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 15:42:59 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>
Message-ID: <3AC48D63.A8AFA489@lemburg.com>

Guido van Rossum wrote:
> > ...
> > So anybody porting to Python 2.1 will essentially have to remove all
> > __debug__ blocks that were previously disabled by assigning 0 to
> > __debug__. I think this is undesirable.
> 
> Assigning to __debug__ was never well-defined.  You used it at your
> own risk.
> 
> > As I recall, in the original description of __debug__, being able to
> > assign to it was reported as one of its main features, so that you
> > still had a run-time option (unless the interpreter was running with
> > -O, which eliminates the __debug__ blocks).
> 
> The manual has always used words that suggest that there is something
> special about __debug__.  And there was: the compiler assumed it could
> eliminate blocks started with "if __debug__:" when compiling in -O
> mode.  Also, assert statements have always used LOAD_GLOBAL to
> retrieve the __debug__ variable.
> 
> > So in short, I think this change should be reverted.
> 
> It's possible that it breaks more code, and it's possible that we end
> up having to change the error into a warning for now.  But I insist
> that assignment to __debug__ should become illegal.  You can *use* the
> variable (to determine whether -O is on or not), but you can't *set*
> it.
> 
> > Regards,
> > Martin
> >
> > P.S. What was the motivation for that change, anyway?
> 
> To enforce a restriction that was always intended: __debug__ should be
> a read-only variable.

So you are suggesting that we change all our code to something like:

__enable_debug__ = 0 # set to 0 for production mode

...

if __debug__ and __enable_debug__:
   print 'debugging information'

...

I don't see the point in having to introduce a new variable
just to disable debugging code in Python code which does not
run under -O.

What does defining __debug__ as read-only variable buy us 
in the long term ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at digicool.com  Fri Mar 30 16:02:35 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 09:02:35 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 15:42:59 +0200."
             <3AC48D63.A8AFA489@lemburg.com> 
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>  
            <3AC48D63.A8AFA489@lemburg.com> 
Message-ID: <200103301402.JAA23365@cj20424-a.reston1.va.home.com>

> So you are suggesting that we change all our code to something like:
> 
> __enable_debug__ = 0 # set to 0 for production mode
> 
> ...
> 
> if __debug__ and __enable_debug__:
>    print 'debugging information'
> 
> ...

I can't suggest anything, because I have no idea what semantics you
are assuming for __debug__ here, and I have no idea what you want with
that code.  Maybe you'll want to say "__debug__ = 1" even when you are
in -O mode -- that will definitely not work!

The form above won't (currently) be optimized out -- only "if
__debug__:" is optimized away, nothing more complicated (not even "if
(__debug__):").

In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
__UNDERSCORE__ CONVENTION!  Those names are reserved for the
interpreter, and you risk that they will be assigned a different
semantics in the future.

> I don't see the point in having to introduce a new variable
> just to disable debugging code in Python code which does not
> run under -O.
> 
> What does defining __debug__ as read-only variable buy us 
> in the long term ?

It allows the compiler to assume that __debug__ is a built-in name.
In the future, the __debug__ variable may become meaningless, as we
develop more differentiated optimization options.

The *only* acceptable use for __debug__ is to get rid of code that is
essentially an assertion but can't be spelled with just an assertion,
e.g.

def f(L):
    if __debug__:
        # Assert L is a list of integers:
        for item in L:
            assert isinstance(item, type(1))
    ...

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fredrik at pythonware.com  Fri Mar 30 16:07:08 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 30 Mar 2001 16:07:08 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>             <3AC48D63.A8AFA489@lemburg.com>  <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <018001c0b922$b58b5d50$0900a8c0@SPIFF>

guido wrote:
> In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> __UNDERSCORE__ CONVENTION!

is the "__version__" convention documented somewhere?

Cheers /F




From moshez at zadka.site.co.il  Fri Mar 30 16:21:27 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Fri, 30 Mar 2001 16:21:27 +0200
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <018001c0b922$b58b5d50$0900a8c0@SPIFF>
References: <018001c0b922$b58b5d50$0900a8c0@SPIFF>, <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>             <3AC48D63.A8AFA489@lemburg.com>  <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <E14izmJ-0006yR-00@darjeeling>

On Fri, 30 Mar 2001, "Fredrik Lundh" <fredrik at pythonware.com> wrote:
 
> is the "__version__" convention documented somewhere?

Yes. I don't remember where, but the words are something like "the __ names
are reserved for use by the infrastructure, loosely defined as the interpreter
and the standard library. Code which has aspirations to be part of the
infrastructure must use a unique prefix like __bobo_pos__"

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Fri Mar 30 16:40:00 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 09:40:00 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 16:07:08 +0200."
             <018001c0b922$b58b5d50$0900a8c0@SPIFF> 
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com>  
            <018001c0b922$b58b5d50$0900a8c0@SPIFF> 
Message-ID: <200103301440.JAA23550@cj20424-a.reston1.va.home.com>

> guido wrote:
> > In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> > __UNDERSCORE__ CONVENTION!
> 
> is the "__version__" convention documented somewhere?

This is a trick question, right?  :-)

__version__ may not be documented but is in de-facto use.  Folks
introducing other names (e.g. __author__, __credits__) should really
consider a PEP before grabbing a piece of the namespace.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mal at lemburg.com  Fri Mar 30 17:10:17 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 17:10:17 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>  
	            <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <3AC4A1D9.9D4C5BF7@lemburg.com>

Guido van Rossum wrote:
> 
> > So you are suggesting that we change all our code to something like:
> >
> > __enable_debug__ = 0 # set to 0 for production mode
> >
> > ...
> >
> > if __debug__ and __enable_debug__:
> >    print 'debugging information'
> >
> > ...
> 
> I can't suggest anything, because I have no idea what semantics you
> are assuming for __debug__ here, and I have no idea what you want with
> that code.  Maybe you'll want to say "__debug__ = 1" even when you are
> in -O mode -- that will definitely not work!

I know, but that's what I'm expecting. The point was to be able
to disable debugging code when running Python in non-optimized mode.
We'd have to change our code and use a new variable to work
around the SyntaxError exception.

While this is not so much of a problem for new code, existing code
will break (i.e. no longer byte-compile) in Python 2.1.

A warning would be OK, but adding yet another SyntaxError for previously 
perfectly valid code is not going to make the Python users out there 
very happy... the current situation with two different settings
in common use out there (Python 1.5.2 and 2.0) is already a pain
to maintain because of DLL problems on Windows platforms.

I don't think that introducing even more subtle problems in 2.1
is going to be well accepted by Joe User.
 
> The form above won't (currently) be optimized out -- only "if
> __debug__:" is optimized away, nothing more complicated (not even "if
> (__debug__):").

Ok, make the code look like this then:

if __debug__:
   if enable_debug:
       print 'debug info'
 
> In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> __UNDERSCORE__ CONVENTION!  Those names are reserved for the
> interpreter, and you risk that they will be assigned a different
> semantics in the future.

Hey, this was just an example... ;-)

> > I don't see the point in having to introduce a new variable
> > just to disable debugging code in Python code which does not
> > run under -O.
> >
> > What does defining __debug__ as read-only variable buy us
> > in the long term ?
> 
> It allows the compiler to assume that __debug__ is a built-in name.
> In the future, the __debug__ variable may become meaningless, as we
> develop more differentiated optimization options.
> 
> The *only* acceptable use for __debug__ is to get rid of code that is
> essentially an assertion but can't be spelled with just an assertion,
> e.g.
> 
> def f(L):
>     if __debug__:
>         # Assert L is a list of integers:
>         for item in L:
>             assert isinstance(item, type(1))
>     ...

Maybe just me, but I use __debug__ a lot to do extra logging or 
printing in my code too; not just for assertions.
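A minimal sketch of that logging pattern (the function and its output are illustrative, not taken from any real codebase):

```python
# Sketch of using __debug__ to guard extra diagnostics; under -O the
# whole "if __debug__:" block is compiled away, so the logging is free
# in optimized runs.
def process(items):
    if __debug__:
        print("processing %d items" % len(items))  # extra logging, not an assertion
    return [item * 2 for item in items]

print(process([1, 2, 3]))
```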

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From barry at digicool.com  Fri Mar 30 17:38:48 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Fri, 30 Mar 2001 10:38:48 -0500
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de>
	<200103301330.IAA23144@cj20424-a.reston1.va.home.com>
	<3AC48D63.A8AFA489@lemburg.com>
	<200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <15044.43144.133911.800065@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    GvR> The *only* acceptable use for __debug__ is to get rid of code
    GvR> that is essentially an assertion but can't be spelled with
    GvR> just an assertion, e.g.

Interestingly enough, last night Jim Fulton and I talked about a
situation where you might want asserts to survive running under -O,
because you want to take advantage of other optimizations, but you
still want to assert certain invariants in your code.

Of course, you can do this now by just not using the assert
statement.  So that's what we're doing, and for giggles we're multiply
inheriting the exception we raise from AssertionError and our own
exception.  What I think we'd prefer is a separate switch to control
optimization and the disabling of assert.
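The workaround described above can be sketched like this (the class and function names are hypothetical): an explicit check that survives -O, raising an exception that multiply inherits from AssertionError and the application's own exception:

```python
class AppError(Exception):
    """Hypothetical application-specific exception."""

class InvariantViolation(AppError, AssertionError):
    """Raised by checks that must survive -O; still catchable as AssertionError."""

def check(condition, message=""):
    # Unlike the assert statement, this call is never optimized away.
    if not condition:
        raise InvariantViolation(message)

check(1 + 1 == 2)                  # passes silently
try:
    check(1 + 1 == 3, "arithmetic broke")
except AssertionError as e:        # the multiple inheritance makes this catch work
    print("caught:", e)
```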

-Barry



From thomas.heller at ion-tof.com  Fri Mar 30 17:43:00 2001
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 30 Mar 2001 17:43:00 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <0a8201c0b930$19fc0750$e000a8c0@thomasnotebook>

IMO the fix to this bug should also go into 2.0.1:

Bug id 231064, sys.path not set correctly in embedded python interpreter

which is fixed in revision 1.23 of PC/getpathp.c


Thomas Heller




From thomas at xs4all.net  Fri Mar 30 17:48:28 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 30 Mar 2001 17:48:28 +0200
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <15044.43144.133911.800065@anthem.wooz.org>; from barry@digicool.com on Fri, Mar 30, 2001 at 10:38:48AM -0500
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com> <15044.43144.133911.800065@anthem.wooz.org>
Message-ID: <20010330174828.K13066@xs4all.nl>

On Fri, Mar 30, 2001 at 10:38:48AM -0500, Barry A. Warsaw wrote:

> What I think we'd prefer is a separate switch to control
> optimization and the disabling of assert.

You mean something like

#!/usr/bin/python -fno-asserts -fno_debug_ -fdocstrings -fdeadbranch 

Right!-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From Paul.Moore at uk.origin-it.com  Fri Mar 30 17:52:04 2001
From: Paul.Moore at uk.origin-it.com (Moore, Paul)
Date: Fri, 30 Mar 2001 16:52:04 +0100
Subject: [Python-Dev] PEP: Use site-packages on all platforms
Message-ID: <714DFA46B9BBD0119CD000805FC1F53B01B5ADD9@ukrux002.rundc.uk.origin-it.com>

It was suggested that I post this to python-dev, as well as python-list and
the distutils SIG. I apologise if this is being done backwards. Should I get
a proper PEP number first, or is it appropriate to ask for initial comments
like this?

Paul

-----Original Message-----
From: Moore, Paul 
Sent: 30 March 2001 13:32
To: distutils-sig at python.org
Cc: 'python-list at python.org'
Subject: [Distutils] PEP: Use site-packages on all platforms


Attached is a first draft of a proposal to use the "site-packages" directory
for locally installed modules, on all platforms instead of just on Unix. If
the consensus is that this is a worthwhile proposal, I'll submit it as a
formal PEP.

Any advice or suggestions welcomed - I've never written a PEP before - I
hope I've got the procedure right...

Paul Moore

PEP: TBA
Title: Install local packages in site-packages on all platforms
Version: $Revision$
Author: Paul Moore <gustav at morpheus.demon.co.uk>
Status: Draft
Type: Standards Track
Python-Version: 2.2
Created: 2001-03-30
Post-History: TBA

Abstract

    The standard Python distribution includes a directory Lib/site-packages,
    which is used on Unix platforms to hold locally-installed modules and
    packages. The site.py module distributed with Python includes support
    for locating modules in this directory.

    This PEP proposes that the site-packages directory should be used
    uniformly across all platforms for locally installed modules.


Motivation

    On Windows platforms, the default setting for sys.path does not
    include a directory suitable for users to install locally-developed
    modules. The "expected" location appears to be the directory
    containing the Python executable itself. Including locally developed
    code in the same directory as installed executables is not good
    practice.

    Clearly, users can manipulate sys.path, either in a locally modified
    site.py, or in a suitable sitecustomize.py, or even via .pth files.
    However, there should be a standard location for such files, rather than
    relying on every individual site having to set their own policy.
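    For illustration, a minimal sitecustomize.py along these lines (the
    directory name is hypothetical) is one such per-site workaround:

```python
# sitecustomize.py -- a minimal sketch; the "local-packages" name is
# hypothetical.  Python imports this module automatically at startup
# (when it is found on sys.path), so appending a directory here makes
# it visible to every script run with this interpreter.
import os
import sys

local_pkgs = os.path.join(sys.prefix, "local-packages")  # hypothetical location
if local_pkgs not in sys.path:
    sys.path.append(local_pkgs)
```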

    In addition, with distutils becoming more prevalent as a means of
    distributing modules, the need for a standard install location for
    distributed modules will become more common. It would be better to
    define such a standard now, rather than later, when more
    distutils-based packages exist which will need rebuilding.

    It is relevant to note that prior to Python 2.1, the site-packages
    directory was not included in sys.path for Macintosh platforms. This
    has been changed in 2.1, and the Macintosh now includes site-packages
    in sys.path, leaving Windows as the only major platform with no
    site-specific modules directory.


Implementation

    The implementation of this feature is fairly trivial. All that would be
    required is a change to site.py, to change the section setting sitedirs.
    The Python 2.1 version has

        if os.sep == '/':
            sitedirs = [makepath(prefix,
                                 "lib",
                                 "python" + sys.version[:3],
                                 "site-packages"),
                        makepath(prefix, "lib", "site-python")]
        elif os.sep == ':':
            sitedirs = [makepath(prefix, "lib", "site-packages")]
        else:
            sitedirs = [prefix]

    A suitable change would be to simply replace the last 4 lines with

        else:
            sitedirs = [makepath(prefix, "lib", "site-packages")]

    Changes would also be required to distutils, in the sysconfig.py
    file. It is worth noting that, as of this writing, this file does not
    seem to have been updated in line with the change of policy on the
    Macintosh.

Notes

    1. It would be better if this change could be included in Python 2.1, as
       changing something of this nature is better done sooner, rather than
       later, to reduce the backward-compatibility burden. This is extremely
       unlikely to happen at this late stage in the release cycle, however.

    2. This change does not preclude packages using the current location -
       the change only adds a directory to sys.path, it does not remove
       anything.

    3. In the Windows distribution of Python 2.1 (beta 1), the
       Lib\site-packages directory has been removed. It would need to be
       reinstated.


Copyright

    This document has been placed in the public domain.

_______________________________________________
Distutils-SIG maillist  -  Distutils-SIG at python.org
http://mail.python.org/mailman/listinfo/distutils-sig



From mal at lemburg.com  Fri Mar 30 18:09:26 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 18:09:26 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com> <15044.43144.133911.800065@anthem.wooz.org> <20010330174828.K13066@xs4all.nl>
Message-ID: <3AC4AFB6.23A17755@lemburg.com>

Thomas Wouters wrote:
> 
> On Fri, Mar 30, 2001 at 10:38:48AM -0500, Barry A. Warsaw wrote:
> 
> > What I think we'd prefer is a separate switch to control
> > optimization and the disabling of assert.
> 
> You mean something like
> 
> #!/usr/bin/python -fno-asserts -fno_debug_ -fdocstrings -fdeadbranch

Sounds like a good idea, but how do you tell the interpreter
which asserts to leave enabled and which to remove from the
code?

In general, I agree, though: more fine-grained control over
optimizations would be a Good Thing (even more so since we are
talking about non-existent code analysis tools here ;-).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From paul at pfdubois.com  Fri Mar 30 19:01:39 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Fri, 30 Mar 2001 09:01:39 -0800
Subject: [Python-Dev] Assigning to __debug__
Message-ID: <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com>

FWIW, this change broke a lot of my code and it took an hour or two to fix
it. I too was misled by the wording when __debug__ was introduced. I could
swear there were even examples of assigning to it, but maybe I'm dreaming.
Anyway, I thought I could.

Regardless of my delusions, this is another change that breaks code in the
middle of a beta cycle. I think that is not a good thing. It is one thing
when one goes to get a new beta or alpha; you expect to spend some time
then. It is another when one has been a good soldier and tried the beta and
is now using it for routine work and updating to a new version of it breaks
something because someone thought it ought to be broken. (If I don't use it
for my work I certainly won't find any problems with it). I realize that
this can't be a hard and fast rule but I think this one in particular
deserves warning status now and change in 2.2.




From barry at digicool.com  Fri Mar 30 19:16:28 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Fri, 30 Mar 2001 12:16:28 -0500
Subject: [Python-Dev] Assigning to __debug__
References: <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com>
Message-ID: <15044.49004.757215.882179@anthem.wooz.org>

>>>>> "PFD" == Paul F Dubois <paul at pfdubois.com> writes:

    PFD> Regardless of my delusions, this is another change that
    PFD> breaks code in the middle of a beta cycle.

I agree with Paul.  It's too late in the beta cycle to break code, and
I /also/ dimly remember assignment to __debug__ being semi-blessed.

Let's make it a warning or revert the change.

-Barry



From guido at digicool.com  Fri Mar 30 19:19:31 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 12:19:31 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 10:38:48 EST."
             <15044.43144.133911.800065@anthem.wooz.org> 
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com>  
            <15044.43144.133911.800065@anthem.wooz.org> 
Message-ID: <200103301719.MAA24153@cj20424-a.reston1.va.home.com>

>     GvR> The *only* acceptable use for __debug__ is to get rid of code
>     GvR> that is essentially an assertion but can't be spelled with
>     GvR> just an assertion, e.g.
> 
> Interestingly enough, last night Jim Fulton and I talked about a
> situation where you might want asserts to survive running under -O,
> because you want to take advantage of other optimizations, but you
> still want to assert certain invariants in your code.
> 
> Of course, you can do this now by just not using the assert
> statement.  So that's what we're doing, and for giggles we're multiply
> inheriting the exception we raise from AssertionError and our own
> exception.  What I think we'd prefer is a separate switch to control
> optimization and the disabling of assert.

That's one of the things I was alluding to when I talked about more
diversified control over optimizations.  I guess then the __debug__
variable would indicate whether or not assertions are turned on;
something else would let you query the compiler's optimization level.
But assigning to __debug__ still wouldn't do what you wanted (unless
we decided to *make* this the way to turn assertions on or off in a
module -- but since this is a compile-time thing, it would require
that the rhs of the assignment was a constant).
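For what it's worth, in current CPython the compiler does reject the assignment outright, which can be checked directly (a minimal sketch of today's behaviour, not of the 2.1-era interpreter under discussion):

```python
# Assigning to __debug__ is refused at compile time with a SyntaxError,
# consistent with the compiler treating it as a constant.
try:
    compile("__debug__ = 1", "<example>", "exec")
    print("assignment accepted")
except SyntaxError:
    print("assignment rejected at compile time")
```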

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Fri Mar 30 19:37:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 12:37:37 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 09:01:39 PST."
             <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com> 
References: <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com> 
Message-ID: <200103301737.MAA24325@cj20424-a.reston1.va.home.com>

> FWIW, this change broke a lot of my code and it took an hour or two to fix
> it. I too was misled by the wording when __debug__ was introduced. I could
> swear there were even examples of assigning to it, but maybe I'm dreaming.
> Anyway, I thought I could.
> 
> Regardless of my delusions, this is another change that breaks code in the
> middle of a beta cycle. I think that is not a good thing. It is one thing
> when one goes to get a new beta or alpha; you expect to spend some time
> then. It is another when one has been a good soldier and tried the beta and
> is now using it for routine work and updating to a new version of it breaks
> something because someone thought it ought to be broken. (If I don't use it
> for my work I certainly won't find any problems with it). I realize that
> this can't be a hard and fast rule but I think this one in particular
> deserves warning status now and change in 2.2.

OK, this is the second confirmed report of broken 3rd party code, so
we'll change this into a warning.  Jeremy, that should be easy, right?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Fri Mar 30 19:41:41 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 12:41:41 -0500
Subject: [Python-Dev] PEP: Use site-packages on all platforms
In-Reply-To: Your message of "Fri, 30 Mar 2001 16:52:04 +0100."
             <714DFA46B9BBD0119CD000805FC1F53B01B5ADD9@ukrux002.rundc.uk.origin-it.com> 
References: <714DFA46B9BBD0119CD000805FC1F53B01B5ADD9@ukrux002.rundc.uk.origin-it.com> 
Message-ID: <200103301741.MAA24378@cj20424-a.reston1.va.home.com>

I think this is a good idea.  Submit the PEP to Barry!

I doubt that we can introduce this into Python 2.1 this late in the
release cycle.  Would that be a problem?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Fri Mar 30 20:31:31 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 30 Mar 2001 20:31:31 +0200
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <200103301330.IAA23144@cj20424-a.reston1.va.home.com> (message
	from Guido van Rossum on Fri, 30 Mar 2001 08:30:18 -0500)
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>
Message-ID: <200103301831.f2UIVVm01525@mira.informatik.hu-berlin.de>

> I checked in a fix to IDLE too, but it seems you were using an
> externally-installed version of IDLE.

Sorry about that; I actually used one from CVS, with a sticky 2.0 tag
:-(

> Assigning to __debug__ was never well-defined.  You used it at your
> own risk.

When __debug__ was first introduced, the NEWS entry read

# Without -O, the assert statement actually generates code that first
# checks __debug__; if this variable is false, the assertion is not
# checked.  __debug__ is a built-in variable whose value is
# initialized to track the -O flag (it's true iff -O is not
# specified).  With -O, no code is generated for assert statements,
# nor for code of the form ``if __debug__: <something>''.

So it clearly says that it is a variable, and that assert will check
its value at runtime. I can't quote any specific messages, but I
recall that you've explained it that way in public as well.

Regards,
Martin



From tim.one at home.com  Fri Mar 30 22:17:00 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 30 Mar 2001 15:17:00 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <018001c0b922$b58b5d50$0900a8c0@SPIFF>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFMJJAA.tim.one@home.com>

[Guido]
> In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> __UNDERSCORE__ CONVENTION!

[/F]
> is the "__version__" convention documented somewhere?

In the Language Reference manual, section "Reserved classes of identifiers",
middle line of the table.  It would benefit from more words, though (it just
says "System-defined name" now, and hostile users are known to have trouble
telling themselves apart from "the system" <wink>).




From tim.one at home.com  Fri Mar 30 22:30:53 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 30 Mar 2001 15:30:53 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <200103301831.f2UIVVm01525@mira.informatik.hu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFPJJAA.tim.one@home.com>

Take a trip down memory lane:

    http://groups.yahoo.com/group/python-list/message/19647

That's the c.l.py msg in which Guido first introduced the idea of __debug__
(and DAMN was searching life easier before DejaNews lost its memory!).

The debate immediately following that (cmdline arguments and all) is being
reinvented here now.

Nothing actually changed from Guido's first proposal (above), except that he
gave up his opposition to making "assert" a reserved word (for which
far-seeing flexibility I am still most grateful), and he actually implemented
the "PS here's a variant" flavor.

I wasn't able to find anything in that debate where Guido explicitly said you
couldn't bind __debug__ yourself, but neither could I find anything saying
you could, and I believe him when he says "no binding" was the *intent*
(that's most consistent with everything he said at the time).

those-who-don't-remember-the-past-are-doomed-to-read-me-nagging-them-
    about-it<wink>-ly y'rs  - tim




From clee at gnwy100.wuh.wustl.edu  Sat Mar 31 17:08:15 2001
From: clee at gnwy100.wuh.wustl.edu (Christopher Lee)
Date: Sat, 31 Mar 2001 09:08:15 -0600 (CST)
Subject: [Python-Dev] submitted patch to linuxaudiodev
Message-ID: <15045.62175.301007.35652@gnwy100.wuh.wustl.edu>

I'm a long-time listener/first-time caller and would like to know what I
should do to have my patch examined.  I've included a description of the
patch below.

Cheers,

-chris

-----------------------------------------------------------------------------
[reference: python-Patches #412553]

Problem:

test_linuxaudiodev.py failed with a "Resource temporarily unavailable"
message (under the cvs version of python)

Analysis:

The lad_write() method attempts to write continuously to /dev/dsp (or
equivalent); when the audio buffer fills, write() returns an error code and
errno is set to EAGAIN, indicating that the device buffer is full.
lad_write() interprets this as an error and, instead of trying to write
again, returns NULL.

Solution:

Use select() to check when the audio device becomes writable and test for
EAGAIN after doing a write().  I've submitted patch #412553, which implements
this solution (use python21-lihnuxaudiodev.c-version2.diff).  With this
patch, test_linuxaudiodev.py passes.  This patch may also be relevant for
the python 2.0.1 bugfix release.
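The retry pattern the patch describes can be sketched in Python terms, demonstrated here on a non-blocking pipe rather than /dev/dsp (the real patch is C code in the linuxaudiodev module; names below are illustrative):

```python
# On EAGAIN, wait with select() until the descriptor is writable, then
# retry the write -- the same logic the patch applies to /dev/dsp.
import fcntl
import os
import select
import threading

def write_all(fd, data):
    """Write data fully, sleeping in select() whenever the buffer is full."""
    view = memoryview(data)
    while view:
        try:
            n = os.write(fd, view)
            view = view[n:]
        except BlockingIOError:           # errno EAGAIN: kernel buffer full
            select.select([], [fd], [])   # block until fd is writable, then retry

r, w = os.pipe()
flags = fcntl.fcntl(w, fcntl.F_GETFL)
fcntl.fcntl(w, fcntl.F_SETFL, flags | os.O_NONBLOCK)  # make writes non-blocking

payload = b"x" * 200_000                  # larger than a typical pipe buffer
received = bytearray()

def drain():
    while len(received) < len(payload):
        received.extend(os.read(r, 65536))

t = threading.Thread(target=drain)
t.start()
write_all(w, payload)                     # hits EAGAIN, waits, and completes
t.join()
print(len(received))
```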


System configuration:

Linux kernel 2.4.2 and 2.4.3 SMP on a dual-processor i686 with a
SoundBlaster Live! Value soundcard.





From tim.one at home.com  Thu Mar  1 00:01:34 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Feb 2001 18:01:34 -0500
Subject: [Python-Dev] Very recent test_global failure
In-Reply-To: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>

> Just fixed.

Not fixed; can no longer compile Python:

compile.c
C:\Code\python\dist\src\Python\compile.c(4184) :
    error C2065: 'DEF_BOUND' : undeclared identifier




From ping at lfw.org  Thu Mar  1 00:11:59 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Wed, 28 Feb 2001 15:11:59 -0800 (PST)
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: <Pine.LNX.4.10.10102270054110.21681-100000@localhost>
Message-ID: <Pine.LNX.4.10.10102281508520.21681-100000@localhost>

Hi again.

On Tue, 27 Feb 2001, Ka-Ping Yee wrote:
> 
> 1.  The error message for UnboundLocalError isn't really accurate.
[...]
>         UnboundLocalError: local name 'x' is not defined

I'd like to check in this change today to make it into the beta.
It's a tiny change, shouldn't break anything as i don't see how
code would rely on the wording of the message, and makes the
message more accurate.  Lib/test/test_scope.py checks for the
error but does not rely on its wording.

If i don't see objections i'll do this tonight.  I hope this is
minor enough not to be a violation of etiquette.


-- ?!ng




From tim.one at home.com  Thu Mar  1 00:13:04 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Feb 2001 18:13:04 -0500
Subject: [Python-Dev] Very recent test_global failure
In-Reply-To: <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCAENOJCAA.tim.one@home.com>

> Oops.  Missed a checkin to symtable.h.
>
> unix-users-prepare-to-recompile-everything-ly y'rs,
> Jeremy

Got that patch, everything compiles now, but test_global still fails.  Are
we, perhaps, missing an update to test_global's expected-output file too?




From tim.one at home.com  Thu Mar  1 00:21:15 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Feb 2001 18:21:15 -0500
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: <Pine.LNX.4.10.10102281508520.21681-100000@localhost>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENPJCAA.tim.one@home.com>

[Ka-Ping Yee]
> On Tue, 27 Feb 2001, Ka-Ping Yee wrote:
> >
> > 1.  The error message for UnboundLocalError isn't really accurate.
> [...]
> >         UnboundLocalError: local name 'x' is not defined
>
> I'd like to check in this change today to make it into the beta.
> It's a tiny change, shouldn't break anything as i don't see how
> code would rely on the wording of the message, and makes the
> message more accurate.  Lib/test/test_scope.py checks for the
> error but does not rely on its wording.
>
> If i don't see objections i'll do this tonight.  I hope this is
> minor enough not to be a violation of etiquette.

Sorry, but I really didn't like this change.  You had to contrive a test case
using "del" for the old

    local variable 'x' referenced before assignment

msg to appear inaccurate the way you read it.  The old msg is much more
on-target 99.999% of the time than just saying "not defined", in
non-contrived test cases.  Even in the  "del" case, it's *still* the case
that the vrbl was referenced before assignment (but after "del").

So -1, on the grounds that the new msg is worse (because less specific)
almost all the time.




From guido at digicool.com  Thu Mar  1 00:25:30 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Feb 2001 18:25:30 -0500
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: Your message of "Wed, 28 Feb 2001 15:11:59 PST."
             <Pine.LNX.4.10.10102281508520.21681-100000@localhost> 
References: <Pine.LNX.4.10.10102281508520.21681-100000@localhost> 
Message-ID: <200102282325.SAA31347@cj20424-a.reston1.va.home.com>

> On Tue, 27 Feb 2001, Ka-Ping Yee wrote:
> > 
> > 1.  The error message for UnboundLocalError isn't really accurate.
> [...]
> >         UnboundLocalError: local name 'x' is not defined
> 
> I'd like to check in this change today to make it into the beta.
> It's a tiny change, shouldn't break anything as i don't see how
> code would rely on the wording of the message, and makes the
> message more accurate.  Lib/test/test_scope.py checks for the
> error but does not rely on its wording.
> 
> If i don't see objections i'll do this tonight.  I hope this is
> minor enough not to be a violation of etiquette.

+1, but first address the comments about test_inspect.py with -O.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From nas at arctrix.com  Thu Mar  1 00:30:23 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Wed, 28 Feb 2001 15:30:23 -0800
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCENPJCAA.tim.one@home.com>; from tim.one@home.com on Wed, Feb 28, 2001 at 06:21:15PM -0500
References: <Pine.LNX.4.10.10102281508520.21681-100000@localhost> <LNBBLJKPBEHFEDALKOLCCENPJCAA.tim.one@home.com>
Message-ID: <20010228153023.A5998@glacier.fnational.com>

On Wed, Feb 28, 2001 at 06:21:15PM -0500, Tim Peters wrote:
> So -1, on the grounds that the new msg is worse (because less specific)
> almost all the time.

I too vote -1 on the proposed new message (but not -1 on changing
the current message).

  Neil



From guido at digicool.com  Thu Mar  1 00:37:01 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Feb 2001 18:37:01 -0500
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: Your message of "Wed, 28 Feb 2001 18:21:15 EST."
             <LNBBLJKPBEHFEDALKOLCCENPJCAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCCENPJCAA.tim.one@home.com> 
Message-ID: <200102282337.SAA31934@cj20424-a.reston1.va.home.com>

Based on Tim's comment I change my +1 into a -1.  I had forgotten the
context.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Thu Mar  1 01:02:39 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Feb 2001 19:02:39 -0500
Subject: [Python-Dev] New fatal error in toaiff.py
Message-ID: <LNBBLJKPBEHFEDALKOLCAEOFJCAA.tim.one@home.com>

>python
Python 2.1a2 (#10, Feb 28 2001, 14:06:44) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import toaiff
Fatal Python error: unknown scope for _toaiff in ?(0) in
    c:\code\python\dist\src\lib\toaiff.py

abnormal program termination

>




From ping at lfw.org  Thu Mar  1 01:13:40 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Wed, 28 Feb 2001 16:13:40 -0800 (PST)
Subject: [Python-Dev] pydoc for CLI-less platforms
Message-ID: <Pine.LNX.4.10.10102281605370.21681-100000@localhost>

For platforms without a command-line like Windows and Mac,
pydoc will probably be used most often as a web server.
The version in CVS right now runs the server invisibly in
the background.  I just added a little GUI to control it
but i don't have an available Windows platform to test on
right now.  If you happen to have a few minutes to spare
and Windows 9x/NT/2k or a Mac, i would really appreciate
if you could give

    http://www.lfw.org/python/pydoc.py

a quick whirl.  It is intended to be invoked on Windows
platforms eventually as pydoc.pyw, so ignore the DOS box
that appears and let me know if the GUI works and behaves
sensibly for you.  When it's okay, i'll check it in.

Many thanks,


-- ?!ng


Windows and Mac compatibility changes:
    handle both <function foo at 0x827a18> and <function foo at 005D7C80>
    normalize case of paths on sys.path to get rid of duplicates
    change 'localhost' to '127.0.0.1' (Mac likes this better)
    add a tiny GUI for stopping the web server




From ping at lfw.org  Thu Mar  1 01:31:19 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Wed, 28 Feb 2001 16:31:19 -0800 (PST)
Subject: [Python-Dev] Re: A few small issues
In-Reply-To: <200102282325.SAA31347@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10102281630330.21681-100000@localhost>

On Wed, 28 Feb 2001, Guido van Rossum wrote:
> +1, but first address the comments about test_inspect.py with -O.

Okay, will do (will fix test_inspect, won't change UnboundLocalError).


-- ?!ng




From pedroni at inf.ethz.ch  Thu Mar  1 01:57:45 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 01:57:45 +0100
Subject: [Python-Dev] nested scopes. global: have I got it right?
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <000d01c0a1ea$a1d53e60$f55821c0@newmexico>

Hi. Is the following true?

PEP227 states:
"""
If the global statement occurs within a block, all uses of the
name specified in the statement refer to the binding of that name
in the top-level namespace.
"""

but this is a bit ambiguous, because the global declaration (I imagine
for backward compatibility) does not affect the code blocks of nested
(function) definitions. So

x=7
def f():
  global x
  def g():
    exec "x=3"
    return x
  print g()

f()

prints 3, not 7.
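For contrast, on a modern Python 3 interpreter - where exec is a function and cannot rebind a function's locals - the same probe yields 7, not 3 (a sketch only; the 2.1 statement form under discussion behaves differently):

```python
x = 7

def g():
    exec("x = 3")   # binds x only in exec's temporary local namespace
    return x        # no local x was ever created, so the global is read

assert g() == 7
```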


PS: this improves backward compatibility, but either the PEP is
ambiguous or the block concept does not imply nested definitions(?).
This affects only special cases, but in the presence of nested scopes
it is quite strange to have declarations that do not extend to inner
scopes.




From guido at digicool.com  Thu Mar  1 02:08:32 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Feb 2001 20:08:32 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: Your message of "Thu, 01 Mar 2001 01:57:45 +0100."
             <000d01c0a1ea$a1d53e60$f55821c0@newmexico> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>  
            <000d01c0a1ea$a1d53e60$f55821c0@newmexico> 
Message-ID: <200103010108.UAA00516@cj20424-a.reston1.va.home.com>

> Hi. Is the following true?
> 
> PEP227 states:
> """
> If the global statement occurs within a block, all uses of the
> name specified in the statement refer to the binding of that name
> in the top-level namespace.
> """
> 
> but this is a bit ambiguous, because the global declaration (I imagine
> for backward compatibility) does not affect the code blocks of nested
> (function) definitions. So
> 
> x=7
> def f():
>   global x
>   def g():
>     exec "x=3"
>     return x
>   print g()
> 
> f()
> 
> prints 3, not 7.

Unclear whether this should change.  The old rule can also be read as
"you have to repeat 'global' for a variable in each scope where you
intend to assign to it".

> PS: this improves backward compatibility, but either the PEP is
> ambiguous or the block concept does not imply nested definitions(?).
> This affects only special cases, but in the presence of nested scopes
> it is quite strange to have declarations that do not extend to inner
> scopes.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From pedroni at inf.ethz.ch  Thu Mar  1 02:24:53 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 02:24:53 +0100
Subject: [Python-Dev] nested scopes. global: have I got it right?
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>             <000d01c0a1ea$a1d53e60$f55821c0@newmexico>  <200103010108.UAA00516@cj20424-a.reston1.va.home.com>
Message-ID: <005301c0a1ee$6c30cdc0$f55821c0@newmexico>

I didn't want to start a discussion; I was more concerned with whether
I got the semantics (that I should implement) right.
So:
  x=7
  def f():
     x=1
     def g():
       global x
       def h(): return x
       return h()
     return g()

will print 1. Ok.

regards.

PS: I tried this with a2 and Python just died; I imagine this has been fixed.





From guido at digicool.com  Thu Mar  1 02:42:49 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Feb 2001 20:42:49 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: Your message of "Thu, 01 Mar 2001 02:24:53 +0100."
             <005301c0a1ee$6c30cdc0$f55821c0@newmexico> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <200103010108.UAA00516@cj20424-a.reston1.va.home.com>  
            <005301c0a1ee$6c30cdc0$f55821c0@newmexico> 
Message-ID: <200103010142.UAA00686@cj20424-a.reston1.va.home.com>

> I didn't want to start a discussion; I was more concerned with
> whether I got the semantics (that I should implement) right.
> So:
>   x=7
>   def f():
>      x=1
>      def g():
>        global x
>        def h(): return x
>        return h()
>      return g()

and then print f() as main, right?

> will print 1. Ok.
> 
> regards.

Argh!  I honestly don't know what this ought to do.  Under the rules
as I currently think of them this would print 1.  But that's at least
surprising, so maybe we'll have to revisit this.

Jeremy, also please note that if I add "from __future__ import
nested_scopes" to the top, this dumps core, saying: 

    lookup 'x' in g 2 -1
    Fatal Python error: com_make_closure()
    Aborted (core dumped)

Maybe you can turn this into a regular error? <0.5 wink>

> PS: I tried this with a2 and Python just died; I imagine this has
> been fixed.

Seems so. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Thu Mar  1 03:11:25 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Feb 2001 21:11:25 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEOMJCAA.tim.one@home.com>

[Samuele Pedroni]
> x=7
> def f():
>   global x
>   def g():
>     exec "x=3"
>     return x
>   print g()
>
> f()
>
> prints 3, not 7.

Note that the Ref Man (section on the global stmt) adds some more wrinkles:

    ...
    global is a directive to the parser.  It applies only to code
    parsed at the same time as the global statement.  In particular,
    a global statement contained in an exec statement does not
    affect the code block containing the exec statement, and code
    contained in an exec statement is unaffected by global statements
    in the code containing the exec statement.  The same applies to the
    eval(), execfile() and compile() functions.
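The wrinkle is easiest to see with explicit namespaces. A modern-Python sketch of the same rule (a global statement applies only to the code block it is compiled with):

```python
g_ns = {"x": 7}   # plays the role of the top-level namespace
l_ns = {}         # local namespace for the exec'd code

exec("x = 3", g_ns, l_ns)             # plain assignment binds in the local dict
assert g_ns["x"] == 7 and l_ns["x"] == 3

exec("global x\nx = 5", g_ns, l_ns)   # global in the exec'd block binds in g_ns
assert g_ns["x"] == 5 and l_ns["x"] == 3
```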


From Jason.Tishler at dothill.com  Thu Mar  1 03:44:47 2001
From: Jason.Tishler at dothill.com (Jason Tishler)
Date: Wed, 28 Feb 2001 21:44:47 -0500
Subject: [Python-Dev] Re: Case-sensitive import
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIENHJCAA.tim.one@home.com>; from tim.one@home.com on Wed, Feb 28, 2001 at 05:21:02PM -0500
References: <20010228151728.Q449@dothill.com> <LNBBLJKPBEHFEDALKOLCIENHJCAA.tim.one@home.com>
Message-ID: <20010228214447.I252@dothill.com>

Tim,

On Wed, Feb 28, 2001 at 05:21:02PM -0500, Tim Peters wrote:
> And thank you for your Cygwin work --

You're welcome -- I appreciate the willingness of the core Python team to
consider Cygwin-related patches.

> someday I hope to use Cygwin for more
> than just running "patch" on this box <sigh> ...

Be careful!  First, you may use grep occasionally.  Next, you may find
yourself writing shell scripts.  Before you know it, you have crossed
over to the Unix side.  You have been warned! :,)

Thanks,
Jason

-- 
Jason Tishler
Director, Software Engineering       Phone: +1 (732) 264-8770 x235
Dot Hill Systems Corp.               Fax:   +1 (732) 264-8798
82 Bethany Road, Suite 7             Email: Jason.Tishler at dothill.com
Hazlet, NJ 07730 USA                 WWW:   http://www.dothill.com



From greg at cosc.canterbury.ac.nz  Thu Mar  1 03:58:06 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 01 Mar 2001 15:58:06 +1300 (NZDT)
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEOMJCAA.tim.one@home.com>
Message-ID: <200103010258.PAA02214@s454.cosc.canterbury.ac.nz>

Quoth the Samuele Pedroni:

> In particular,
> a global statement contained in an exec statement does not
> affect the code block containing the exec statement, and code
> contained in an exec statement is unaffected by global statements
> in the code containing the exec statement.

I think this is broken. As long as we're going to allow
exec-with-1-arg to implicitly mess with the current namespace,
names in the exec'ed statement should have the same meanings
as they do in the surrounding statically-compiled code.

So, global statements in the surrounding scope should be honoured
in the exec'ed statement, and global statements should be disallowed
within the exec'ed statement.

Better still, get rid of both exec-with-1-arg and locals()
altogether...

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+




From fdrake at users.sourceforge.net  Thu Mar  1 06:20:23 2001
From: fdrake at users.sourceforge.net (Fred L. Drake)
Date: Wed, 28 Feb 2001 21:20:23 -0800
Subject: [Python-Dev] [development doc updates]
Message-ID: <E14YLVn-0003XL-00@usw-pr-shell2.sourceforge.net>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/





From jeremy at alum.mit.edu  Thu Mar  1 06:49:33 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 00:49:33 -0500 (EST)
Subject: [Python-Dev] code objects leakin'
Message-ID: <15005.58093.314004.571576@w221.z064000254.bwi-md.dsl.cnc.net>

It looks like code objects are leaked with surprising frequency.  I
added a simple counter that records all code object allocs and
deallocs.  For many programs, the net is zero.  For some, including
setup.py and the regression test, it's much larger than zero.

I've got no time to look at this before the beta, but perhaps someone
else does.  Even if it can't be fixed, it would be helpful to know
what's going wrong.

I am fairly certain that recursive functions are being leaked, even
after patching the function object's traverse function to visit the
func_closure.

Jeremy



From jeremy at alum.mit.edu  Thu Mar  1 07:00:25 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 01:00:25 -0500 (EST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Objects funcobject.c,2.35,2.36
In-Reply-To: <E14YMEZ-0006od-00@usw-pr-cvs1.sourceforge.net>
References: <E14YMEZ-0006od-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <15005.58745.306448.535530@w221.z064000254.bwi-md.dsl.cnc.net>

This change does not appear to solve the leaks, but it seems
necessary for correctness.

Jeremy



From martin at loewis.home.cs.tu-berlin.de  Thu Mar  1 07:16:59 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 1 Mar 2001 07:16:59 +0100
Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax
Message-ID: <200103010616.f216Gx301229@mira.informatik.hu-berlin.de>

> but where's the patch?

Argh. It's now at http://www.informatik.hu-berlin.de/~loewis/python/directive.diff

> other tools that parse Python will have to be adapted.

Yes, that's indeed a problem. Initially, that syntax will be used only
to denote modules that use nested scopes, so those tools would have
time to adjust.

> The __future__ hack doesn't need that.

If it is *just* parsing, then yes. If it does any further analysis
(e.g. "find definition (of a variable)" aka "find assignments to"), or
if they inspect code objects, these tools again need to be adapted.

Regards,
Martin




From thomas at xs4all.net  Thu Mar  1 08:29:09 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 1 Mar 2001 08:29:09 +0100
Subject: [Python-Dev] Re: Case-sensitive import
In-Reply-To: <20010228214447.I252@dothill.com>; from Jason.Tishler@dothill.com on Wed, Feb 28, 2001 at 09:44:47PM -0500
References: <20010228151728.Q449@dothill.com> <LNBBLJKPBEHFEDALKOLCIENHJCAA.tim.one@home.com> <20010228214447.I252@dothill.com>
Message-ID: <20010301082908.I9678@xs4all.nl>

On Wed, Feb 28, 2001 at 09:44:47PM -0500, Jason Tishler wrote:

[ Tim Peters ]
> > someday I hope to use Cygwin for more
> > than just running "patch" on this box <sigh> ...

> Be careful!  First, you may use grep occasionally.  Next, you may find
> yourself writing shell scripts.  Before you know it, you have crossed
> over to the Unix side.  You have been warned! :,)

Well, Tim used to be a true Jedi Knight, but was won over by the dark side.
His name keeps popping up in decidedly unixlike tools, like Emacs' 'python'
mode. It is certain that his defection brought balance to the force (or at
least to Python) but we'd still like to rescue him before he is forced to
sacrifice himself to save Python. ;)

Lets-just-call-him-anatim-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fredrik at pythonware.com  Thu Mar  1 12:57:08 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 1 Mar 2001 12:57:08 +0100
Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax
References: <200102282219.f1SMJDg04557@mira.informatik.hu-berlin.de>  <200102282248.RAA31007@cj20424-a.reston1.va.home.com>
Message-ID: <02c901c0a246$bef128e0$0900a8c0@SPIFF>

Guido wrote:
> There's one downside to the "directive" syntax: other tools that parse
> Python will have to be adapted.  The __future__ hack doesn't need
> that.

also:

- "from __future__" gives a clear indication that you're using
  a non-standard feature.  "directive" is too generic.

- everyone knows how to mentally parse from-import statements,
  and that they may have side effects.  nobody knows what
  "directive" does.

- pragmas suck.  we need much more discussion (and calendar
  time) before adding a pragma-like directive to Python.

- "from __future__" makes me smile.  "directive" doesn't.

-1, for now.

Cheers /F




From guido at digicool.com  Thu Mar  1 15:29:10 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 09:29:10 -0500
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: Your message of "Thu, 01 Mar 2001 15:58:06 +1300."
             <200103010258.PAA02214@s454.cosc.canterbury.ac.nz> 
References: <200103010258.PAA02214@s454.cosc.canterbury.ac.nz> 
Message-ID: <200103011429.JAA03471@cj20424-a.reston1.va.home.com>

> Quoth the Samuele Pedroni:
> 
> > In particular,
> > a global statement contained in an exec statement does not
> > affect the code block containing the exec statement, and code
> > contained in an exec statement is unaffected by global statements
> > in the code containing the exec statement.
> 
> I think this is broken. As long as we're going to allow
> exec-with-1-arg to implicitly mess with the current namespace,
> names in the exec'ed statement should have the same meanings
> as they do in the surrounding statically-compiled code.
> 
> So, global statements in the surrounding scope should be honoured
> in the exec'ed statement, and global statements should be disallowed
> within the exec'ed statement.
> 
> Better still, get rid of both exec-with-1-arg and locals()
> altogether...

That's my plan, so I suppose we should not bother to "fix" the broken
behavior that has been around from the start.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Thu Mar  1 15:55:01 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 09:55:01 -0500
Subject: [Python-Dev] PEP 236: an alternative to the __future__ syntax
In-Reply-To: Your message of "Thu, 01 Mar 2001 07:16:59 +0100."
             <200103010616.f216Gx301229@mira.informatik.hu-berlin.de> 
References: <200103010616.f216Gx301229@mira.informatik.hu-berlin.de> 
Message-ID: <200103011455.JAA04064@cj20424-a.reston1.va.home.com>

> Argh. It's now at http://www.informatik.hu-berlin.de/~loewis/python/directive.diff
> 
> > other tools that parse Python will have to be adapted.
> 
> Yes, that's indeed a problem. Initially, that syntax will be used only
> to denote modules that use nested scopes, so those tools would have
> time to adjust.
> 
> > The __future__ hack doesn't need that.
> 
> If it is *just* parsing, then yes. If it does any further analysis
> (e.g. "find definition (of a variable)" aka "find assignments to"), or
> if they inspect code objects, these tools again need to be adapted.

This is just too late for the first beta.  But we'll consider it for
beta 2!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From pedroni at inf.ethz.ch  Thu Mar  1 16:33:14 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 16:33:14 +0100 (MET)
Subject: [Python-Dev] just trying to catch up with the semantic
Message-ID: <200103011533.QAA06035@core.inf.ethz.ch>

Hi.

I read the following CVS log from Jeremy:

> Fix core dump in example from Samuele Pedroni:
> 
> from __future__ import nested_scopes
> x=7
> def f():
>     x=1
>     def g():
>         global x
>         def i():
>             def h():
>                 return x
>             return h()
>         return i()
>     return g()
> 
> print f()
> print x
> 
> This kind of code didn't work correctly because x was treated as free
> in i, leading to an attempt to load x in g to make a closure for i.
> 
> Solution is to make the global decl apply to nested scopes unless there
> is an assignment.  Thus, x in h is global.
> 

Will that be the intended final semantic?

The more backward-compatible semantics would be for that code to print:
1
7
(I think this was the semantics Guido was thinking of.)

Now, if I have understood well, this prints
7
7

but if I put a x=666 in h this prints:
666
7

but the most natural (just IMHO) nesting semantics would in that case be
to print:
666
666
(so x is considered global despite the assignment, because the decl
extends to enclosed scopes too).

I have no preference but I'm confused. Samuele Pedroni.




From guido at digicool.com  Thu Mar  1 16:42:55 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 10:42:55 -0500
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: Your message of "Thu, 01 Mar 2001 05:56:42 PST."
             <E14YTZS-0003kB-00@usw-pr-cvs1.sourceforge.net> 
References: <E14YTZS-0003kB-00@usw-pr-cvs1.sourceforge.net> 
Message-ID: <200103011542.KAA04518@cj20424-a.reston1.va.home.com>

Ping just checked in this:

> Log Message:
> Add __author__ and __credits__ variables.
> 
> 
> Index: tokenize.py
> ===================================================================
> RCS file: /cvsroot/python/python/dist/src/Lib/tokenize.py,v
> retrieving revision 1.19
> retrieving revision 1.20
> diff -C2 -r1.19 -r1.20
> *** tokenize.py	2001/03/01 04:27:19	1.19
> --- tokenize.py	2001/03/01 13:56:40	1.20
> ***************
> *** 10,14 ****
>   it produces COMMENT tokens for comments and gives type OP for all operators."""
>   
> ! __version__ = "Ka-Ping Yee, 26 October 1997; patched, GvR 3/30/98"
>   
>   import string, re
> --- 10,15 ----
>   it produces COMMENT tokens for comments and gives type OP for all operators."""
>   
> ! __author__ = 'Ka-Ping Yee <ping at lfw.org>'
> ! __credits__ = 'first version, 26 October 1997; patched, GvR 3/30/98'
>   
>   import string, re

I'm slightly uncomfortable with the __credits__ variable inserted
here.  First of all, __credits__ doesn't really describe the
information given.  Second, doesn't this info belong in the CVS
history?  I'm not for including random extracts of a module's history
in the source code -- this is more likely than not to become out of
date.  (E.g. from the CVS log it's not clear why my contribution
deserves a mention while Tim's doesn't -- it looks like Tim probably
spent a lot more time thinking about it than I did.)

Another source of discomfort is that there's absolutely no standard
for this kind of meta-data variable.  We've got __version__, and I
believe we once agreed on that (in 1994 or so :-).  But __author__?
__credits__?  What next -- __cute_signoff__?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Thu Mar  1 17:10:28 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 11:10:28 -0500 (EST)
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <200103011533.QAA06035@core.inf.ethz.ch>
References: <200103011533.QAA06035@core.inf.ethz.ch>
Message-ID: <15006.29812.95600.22223@w221.z064000254.bwi-md.dsl.cnc.net>

I'm not convinced there is a natural meaning for this, nor am I
certain that what is now implemented is the least unnatural meaning.

    from __future__ import nested_scopes
    x=7
    def f():
        x=1
        def g():
            global x
            def i():
                def h():
                    return x
                return h()
            return i()
        return g()
    
    print f()
    print x

prints:
    7
    7

I think the chief question is what 'global x' means without any other
reference to x in the same code block.  The other issue is whether a
global statement is a name binding operation of sorts.

If we had
        def g():
	    x = 2            # instead of global
            def i():
                def h():
                    return x
                return h()
            return i()

It is clear that x in h uses the binding introduced in g.

        def g():
            global x
	    x = 2
            def i():
                def h():
                    return x
                return h()
            return i()

Now that x is declared global, should the binding for x in g be
visible in h?  I think it should, because the alternative would be
more confusing.

    def f():
        x = 3
        def g():
            global x
	    x = 2
            def i():
                def h():
                    return x
                return h()
            return i()

If global x meant that the binding for x wasn't visible in nested
scopes, then h would use the binding for x introduced in f.  This is
confusing, because visual inspection shows that the nearest block with
an assignment to x is g.  (You might overlook the global x statement.)

The rule currently implemented is to use the binding introduced in the
nearest enclosing scope.  If the binding happens to be between the
name and the global namespace, that is the binding that is used.

Samuele noted that he thinks the most natural semantics would be for
global to extend into nested scopes.  I think this would be confusing
-- or at least I'm confused <wink>.  

        def g():
            global x
	    x = 2
            def i():
                def h():
                    x = 10
                    return x
                return h()
            return i()

In this case, he suggests that the assignment in h should affect the
global x.  I think this is incorrect because enclosing scopes should
only have an effect when variables are free.  By the normal Python
rules, x is not free in h because there is an assignment to x; x is
just a local.
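Jeremy's first example runs unchanged on a current interpreter (nested scopes became the default in 2.2), and the rule described above survived: the global declaration in g extends into i and h because neither of them rebinds x.

```python
x = 7

def f():
    x = 1                  # f's local; skipped by h's lookup
    def g():
        global x           # makes x global here *and* in the nested blocks
        def i():
            def h():
                return x   # resolves to the module-level x
            return h()
        return i()
    return g()

assert f() == 7
assert x == 7              # nothing ever assigned to the global
```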

Jeremy



From ping at lfw.org  Thu Mar  1 17:13:56 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 1 Mar 2001 08:13:56 -0800 (PST)
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: <200103011542.KAA04518@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10103010752120.862-100000@skuld.kingmanhall.org>

On Thu, 1 Mar 2001, Guido van Rossum wrote:
> I'm slightly uncomfortable with the __credits__ variable inserted
> here.  First of all, __credits__ doesn't really describe the
> information given.

I'll explain the motivation here.  I was going to write something
about this when i got up in the morning, but you've noticed before
i got around to it (and i haven't gone to sleep yet).

    - The __version__ variable really wasn't a useful place for
      this information.  The version of something really isn't
      the same as the author or the date it was created; it should
      be either a revision number from an RCS tag or a number
      updated periodically by the maintainer.  By separating out
      other kinds of information, we allow __version__ to retain
      its focused purpose.

    - The __author__ tag is a pretty standard piece of metadata
      among most kinds of documentation -- there are AUTHOR
      sections in almost all man pages, and similar "creator"
      information in just about every metadata standard for
      documents or work products of any kind.  Contact info and
      copyright info can go here.  This is important because it
      identifies a responsible party -- someone to ask questions
      of, and to send complaints, thanks, and patches to.  Maybe
      one day we can use it to help automate the process of
      assigning patches and directing feedback.

    - The __credits__ tag is a way of acknowledging others who
      contributed to the product.  It can be used to recount a
      little history, but the real motivation for including it
      is social engineering: i wanted to foster a stronger mutual
      gratification culture around Python by giving people a place
      to be generous with their acknowledgements.  It's always
      good to err on the side of generosity rather than stinginess
      when giving praise.  Open source is fueled in large part by
      egoboo, and if we can let everyone participate, peer-to-peer
      style rather than centralized, in patting others on the back,
      then all the better.  People do this in # comments anyway;
      the only difference now is that their notes are visible to pydoc.

> Second, doesn't this info belong in the CVS history?

__credits__ isn't supposed to be a change log; it's a reward
mechanism.  Or consider it ego-Napster, if you prefer.

Share the love. :)

> Another source of discomfort is that there's absolutely no standard
> for this kind of meta-data variable.

I think the behaviour of processing tools such as pydoc will
create a de-facto standard.  I was careful to respect __version__
in the ways that it is currently used, and i am humbly offering
these others in the hope that you will see why they are worth
having, too.



-- ?!ng

"If cryptography is outlawed, only QJVKN YFDLA ZBYCG HFUEG UFRYG..."




From guido at digicool.com  Thu Mar  1 17:30:53 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 11:30:53 -0500
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: Your message of "Thu, 01 Mar 2001 08:13:56 PST."
             <Pine.LNX.4.10.10103010752120.862-100000@skuld.kingmanhall.org> 
References: <Pine.LNX.4.10.10103010752120.862-100000@skuld.kingmanhall.org> 
Message-ID: <200103011630.LAA04973@cj20424-a.reston1.va.home.com>

> On Thu, 1 Mar 2001, Guido van Rossum wrote:
> > I'm slightly uncomfortable with the __credits__ variable inserted
> > here.  First of all, __credits__ doesn't really describe the
> > information given.

Ping replied:
> I'll explain the motivation here.  I was going to write something
> about this when i got up in the morning, but you've noticed before
> i got around to it (and i haven't gone to sleep yet).
> 
>     - The __version__ variable really wasn't a useful place for
>       this information.  The version of something really isn't
>       the same as the author or the date it was created; it should
>       be either a revision number from an RCS tag or a number
>       updated periodically by the maintainer.  By separating out
>       other kinds of information, we allow __version__ to retain
>       its focused purpose.

Sure.

>     - The __author__ tag is a pretty standard piece of metadata
>       among most kinds of documentation -- there are AUTHOR
>       sections in almost all man pages, and similar "creator"
>       information in just about every metadata standard for
>       documents or work products of any kind.  Contact info and
>       copyright info can go here.  This is important because it
>       identifies a responsible party -- someone to ask questions
>       of, and to send complaints, thanks, and patches to.  Maybe
>       one day we can use it to help automate the process of
>       assigning patches and directing feedback.

No problem here.

>     - The __credits__ tag is a way of acknowledging others who
>       contributed to the product.  It can be used to recount a
>       little history, but the real motivation for including it
>       is social engineering: i wanted to foster a stronger mutual
>       gratification culture around Python by giving people a place
>       to be generous with their acknowledgements.  It's always
>       good to err on the side of generosity rather than stinginess
>       when giving praise.  Open source is fueled in large part by
>       egoboo, and if we can let everyone participate, peer-to-peer
>       style rather than centralized, in patting others on the back,
>       then all the better.  People do this in # comments anyway;
>       the only difference now is that their notes are visible to pydoc.

OK.  Then I think you goofed up in the __credits__ you actually
checked in for tokenize.py:

    __credits__ = 'first version, 26 October 1997; patched, GvR 3/30/98'

I would have expected something like this:

    __credits__ = 'contributions: GvR, ESR, Tim Peters, Thomas Wouters, ' \
                  'Fred Drake, Skip Montanaro'

> > Second, doesn't this info belong in the CVS history?
> 
> __credits__ isn't supposed to be a change log; it's a reward
> mechanism.  Or consider it ego-Napster, if you prefer.
> 
> Share the love. :)

You west coasters. :-)

> > Another source of discomfort is that there's absolutely no standard
> > for this kind of meta-data variable.
> 
> I think the behaviour of processing tools such as pydoc will
> create a de-facto standard.  I was careful to respect __version__
> in the ways that it is currently used, and i am humbly offering
> these others in the hope that you will see why they are worth
> having, too.

What does pydoc do with __credits__?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Thu Mar  1 17:37:53 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 11:37:53 -0500 (EST)
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: <4.3.1.2.20010228223714.00d29430@exchange.starmedia.net>
References: <4.3.1.2.20010228223714.00d29430@exchange.starmedia.net>
Message-ID: <15006.31457.377477.65547@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "RT" == Robin Thomas <robin.thomas at starmedia.net> writes:

  RT> Using Python 2.0 on Win32. Am I the only person to be depressed
  RT> by the following behavior now that __getitem__ does the work of
  RT> __getslice__?

You may be the only person to have tried it :-).

  RT> Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32
  >>> d = {}
  >>> d[0:1] = 1
  >>> d
  {slice(0, 1, None): 1}

I think this should raise a TypeError (as you suggested later).

>>> del d[0:1]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: object doesn't support slice deletion
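How the 2.0 snippet fares on a modern interpreter is version-dependent (a sketch: slice objects were later made unhashable in the 2.x line, then hashable again in 3.12):

```python
import sys

d = {}
try:
    d[0:1] = 1     # dicts have no slice-assignment; the slice object is the key
except TypeError:
    # through 3.11: unhashable slice, so this raises the suggested TypeError
    assert sys.version_info < (3, 12)
else:
    # 3.12+: slices are hashable again, so Robin's surprise is back
    assert d == {slice(0, 1, None): 1}
```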

Jeremy



From pedroni at inf.ethz.ch  Thu Mar  1 17:53:43 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 17:53:43 +0100 (MET)
Subject: [Python-Dev] just trying to catch up with the semantic
Message-ID: <200103011653.RAA09025@core.inf.ethz.ch>

Hi.

Your rationale sounds ok.
We are just facing the oddities of the Python rule - that assignment
identifies locals - when extended to the new world of nested scopes.
(Everybody will be confused in his own way ;), better to write
non-confusing code ;))
I think I should really learn to read code this way, and so should
everybody coming from languages with explicit declarations:

are the semantics (expressed through the bytecode instructions below) right?

(I)
    from __future__ import nested_scopes
    x=7
    def f():
        #pseudo-local-decl x
        x=1
        def g():
            global x # global-decl x
            def i():
                def h():
                    return x # => LOAD_GLOBAL
                return h()
            return i()
        return g()
    
    print f()
    print x

(II)
        def g():
            #pseudo-local-decl x
	    x = 2            # instead of global
            def i():
                def h():
                    return x # => LOAD_DEREF (x from g)
                return h()
            return i()

(III)
        def g():
            global x # global-decl x
            x = 2 # => STORE_GLOBAL
            def i():
                def h():
                    return x # => LOAD_GLOBAL
                return h()
            return i()
(IV)           
    def f():
        # pseudo-local-decl x
        x = 3 # => STORE_FAST
        def g():
            global x # global-decl x
            x = 2 # => STORE_GLOBAL
            def i():
                def h():
                    return x # => LOAD_GLOBAL
                return h()
            return i()
(V)
        def g():
            global x # global-decl x
            x = 2 # => STORE_GLOBAL
            def i():
                def h():
                    # pseudo-local-decl x
                    x = 10   # => STORE_FAST
                    return x # => LOAD_FAST
                return h()
            return i()
If one also reads the implicit local-decl here, this is fine; otherwise it
is confusing.  It's a matter of whether 'global' kills the local-decl only
in one scope or in the nested ones too.  I have no preference.


regards, Samuele Pedroni.




From jeremy at alum.mit.edu  Thu Mar  1 17:57:20 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 11:57:20 -0500 (EST)
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <200103011653.RAA09025@core.inf.ethz.ch>
References: <200103011653.RAA09025@core.inf.ethz.ch>
Message-ID: <15006.32624.826559.907667@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "SP" == Samuele Pedroni <pedroni at inf.ethz.ch> writes:

  SP> If one reads also here the implicit local-decl, this is fine,
  SP> otherwise this is confusing. It's a matter whether 'global'
  SP> kills the local-decl only in one scope or in the nesting too. I
  SP> have no preference.

All your examples look like what is currently implemented.  My
preference is that global kills the local-decl only in one scope.
I'll stick with that unless Guido disagrees.

Jeremy



From pedroni at inf.ethz.ch  Thu Mar  1 18:04:56 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 1 Mar 2001 18:04:56 +0100 (MET)
Subject: [Python-Dev] just trying to catch up with the semantic
Message-ID: <200103011704.SAA09425@core.inf.ethz.ch>

[Jeremy] 
> All your examples look like what is currently implemented.  My
> preference is that global kills the local-decl only in one scope.
> I'll stick with that unless Guido disagrees.
At least this will break less code.

regards.




From ping at lfw.org  Thu Mar  1 18:11:28 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 1 Mar 2001 09:11:28 -0800 (PST)
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: <200103011630.LAA04973@cj20424-a.reston1.va.home.com>
Message-ID: <Pine.LNX.4.10.10103010909520.862-100000@skuld.kingmanhall.org>

On Thu, 1 Mar 2001, Guido van Rossum wrote:
> OK.  Then I think you goofed up in the __credits__ you actually
> checked in for tokenize.py:
> 
>     __credits__ = 'first version, 26 October 1997; patched, GvR 3/30/98'

Indeed, that was mindless copying.

> I would have expected something like this:
> 
>     __credits__ = 'contributions: GvR, ESR, Tim Peters, Thomas Wouters, ' \
>                   'Fred Drake, Skip Montanaro'

Sure.  Done.

> You west coasters. :-)

You forget that i'm a Canadian prairie boy at heart. :)

> What does pydoc do with __credits__?

They show up in a little section at the end of the document.


-- ?!ng

"If cryptography is outlawed, only QJVKN YFDLA ZBYCG HFUEG UFRYG..."




From esr at thyrsus.com  Thu Mar  1 18:47:51 2001
From: esr at thyrsus.com (Eric S. Raymond)
Date: Thu, 1 Mar 2001 12:47:51 -0500
Subject: [Python-Dev] Finger error -- my apologies
Message-ID: <20010301124751.B24835@thyrsus.com>

I meant to accept this patch, but I think I rejected it instead.
Sorry, Ping.  Resubmit, please, if I fooed up?
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

It is the assumption of this book that a work of art is a gift, not a
commodity.  Or, to state the modern case with more precision, that works of
art exist simultaneously in two "economies," a market economy and a gift
economy.  Only one of these is essential, however: a work of art can survive
without the market, but where there is no gift there is no art.
	-- Lewis Hyde, The Gift: Imagination and the Erotic Life of Property
-------------- next part --------------
An embedded message was scrubbed...
From: nobody <nobody at sourceforge.net>
Subject: [ python-Patches-405122 ] webbrowser fix
Date: Thu, 01 Mar 2001 06:03:54 -0800
Size: 2012
URL: <http://mail.python.org/pipermail/python-dev/attachments/20010301/e4473d2d/attachment-0001.eml>

From jeremy at alum.mit.edu  Thu Mar  1 19:16:03 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 13:16:03 -0500 (EST)
Subject: [Python-Dev] __future__ and the interactive interpreter prompt
Message-ID: <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net>

from __future__ import nested_scopes is accepted at the interactive
interpreter prompt but has no effect beyond the line on which it was
entered.  You could use it with lambdas entered following a
semicolon, I guess.

I would rather see the future statement take effect for the remainder
of the interactive interpreter session.  I have included a first-cut
patch below that makes this possible, using an object called
PySessionState.  (I don't like the name, but don't have a better one;
PyCompilerFlags?)

The idea of the session state is to record information about the state
of an interactive session that may affect compilation.  The
state object is created in PyRun_InteractiveLoop() and passed all the
way through to PyNode_Compile().

Does this seem a reasonable approach?  Should I include it in the
beta?  Any name suggestions?

Jeremy


Index: Include/compile.h
===================================================================
RCS file: /cvsroot/python/python/dist/src/Include/compile.h,v
retrieving revision 2.27
diff -c -r2.27 compile.h
*** Include/compile.h	2001/02/28 01:58:08	2.27
--- Include/compile.h	2001/03/01 18:18:27
***************
*** 41,47 ****
  
  /* Public interface */
  struct _node; /* Declare the existence of this type */
! DL_IMPORT(PyCodeObject *) PyNode_Compile(struct _node *, char *);
  DL_IMPORT(PyCodeObject *) PyCode_New(
  	int, int, int, int, PyObject *, PyObject *, PyObject *, PyObject *,
  	PyObject *, PyObject *, PyObject *, PyObject *, int, PyObject *); 
--- 41,48 ----
  
  /* Public interface */
  struct _node; /* Declare the existence of this type */
! DL_IMPORT(PyCodeObject *) PyNode_Compile(struct _node *, char *,
! 					 PySessionState *);
  DL_IMPORT(PyCodeObject *) PyCode_New(
  	int, int, int, int, PyObject *, PyObject *, PyObject *, PyObject *,
  	PyObject *, PyObject *, PyObject *, PyObject *, int, PyObject *); 
Index: Include/pythonrun.h
===================================================================
RCS file: /cvsroot/python/python/dist/src/Include/pythonrun.h,v
retrieving revision 2.38
diff -c -r2.38 pythonrun.h
*** Include/pythonrun.h	2001/02/02 18:19:15	2.38
--- Include/pythonrun.h	2001/03/01 18:18:27
***************
*** 7,12 ****
--- 7,16 ----
  extern "C" {
  #endif
  
+ typedef struct {
+ 	int ss_nested_scopes;
+ } PySessionState;
+ 
  DL_IMPORT(void) Py_SetProgramName(char *);
  DL_IMPORT(char *) Py_GetProgramName(void);
  
***************
*** 25,31 ****
  DL_IMPORT(int) PyRun_SimpleString(char *);
  DL_IMPORT(int) PyRun_SimpleFile(FILE *, char *);
  DL_IMPORT(int) PyRun_SimpleFileEx(FILE *, char *, int);
! DL_IMPORT(int) PyRun_InteractiveOne(FILE *, char *);
  DL_IMPORT(int) PyRun_InteractiveLoop(FILE *, char *);
  
  DL_IMPORT(struct _node *) PyParser_SimpleParseString(char *, int);
--- 29,35 ----
  DL_IMPORT(int) PyRun_SimpleString(char *);
  DL_IMPORT(int) PyRun_SimpleFile(FILE *, char *);
  DL_IMPORT(int) PyRun_SimpleFileEx(FILE *, char *, int);
! DL_IMPORT(int) PyRun_InteractiveOne(FILE *, char *, PySessionState *);
  DL_IMPORT(int) PyRun_InteractiveLoop(FILE *, char *);
  
  DL_IMPORT(struct _node *) PyParser_SimpleParseString(char *, int);
Index: Python/compile.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/compile.c,v
retrieving revision 2.184
diff -c -r2.184 compile.c
*** Python/compile.c	2001/03/01 06:09:34	2.184
--- Python/compile.c	2001/03/01 18:18:28
***************
*** 471,477 ****
  static void com_assign(struct compiling *, node *, int, node *);
  static void com_assign_name(struct compiling *, node *, int);
  static PyCodeObject *icompile(node *, struct compiling *);
! static PyCodeObject *jcompile(node *, char *, struct compiling *);
  static PyObject *parsestrplus(node *);
  static PyObject *parsestr(char *);
  static node *get_rawdocstring(node *);
--- 471,478 ----
  static void com_assign(struct compiling *, node *, int, node *);
  static void com_assign_name(struct compiling *, node *, int);
  static PyCodeObject *icompile(node *, struct compiling *);
! static PyCodeObject *jcompile(node *, char *, struct compiling *,
! 			      PySessionState *);
  static PyObject *parsestrplus(node *);
  static PyObject *parsestr(char *);
  static node *get_rawdocstring(node *);
***************
*** 3814,3822 ****
  }
  
  PyCodeObject *
! PyNode_Compile(node *n, char *filename)
  {
! 	return jcompile(n, filename, NULL);
  }
  
  struct symtable *
--- 3815,3823 ----
  }
  
  PyCodeObject *
! PyNode_Compile(node *n, char *filename, PySessionState *sess)
  {
! 	return jcompile(n, filename, NULL, sess);
  }
  
  struct symtable *
***************
*** 3844,3854 ****
  static PyCodeObject *
  icompile(node *n, struct compiling *base)
  {
! 	return jcompile(n, base->c_filename, base);
  }
  
  static PyCodeObject *
! jcompile(node *n, char *filename, struct compiling *base)
  {
  	struct compiling sc;
  	PyCodeObject *co;
--- 3845,3856 ----
  static PyCodeObject *
  icompile(node *n, struct compiling *base)
  {
! 	return jcompile(n, base->c_filename, base, NULL);
  }
  
  static PyCodeObject *
! jcompile(node *n, char *filename, struct compiling *base,
! 	 PySessionState *sess)
  {
  	struct compiling sc;
  	PyCodeObject *co;
***************
*** 3864,3870 ****
  	} else {
  		sc.c_private = NULL;
  		sc.c_future = PyNode_Future(n, filename);
! 		if (sc.c_future == NULL || symtable_build(&sc, n) < 0) {
  			com_free(&sc);
  			return NULL;
  		}
--- 3866,3882 ----
  	} else {
  		sc.c_private = NULL;
  		sc.c_future = PyNode_Future(n, filename);
! 		if (sc.c_future == NULL) {
! 			com_free(&sc);
! 			return NULL;
! 		}
! 		if (sess) {
! 			if (sess->ss_nested_scopes)
! 				sc.c_future->ff_nested_scopes = 1;
! 			else if (sc.c_future->ff_nested_scopes)
! 				sess->ss_nested_scopes = 1;
! 		}
! 		if (symtable_build(&sc, n) < 0) {
  			com_free(&sc);
  			return NULL;
  		}
Index: Python/import.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/import.c,v
retrieving revision 2.169
diff -c -r2.169 import.c
*** Python/import.c	2001/03/01 08:47:29	2.169
--- Python/import.c	2001/03/01 18:18:28
***************
*** 608,614 ****
  	n = PyParser_SimpleParseFile(fp, pathname, Py_file_input);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, pathname);
  	PyNode_Free(n);
  
  	return co;
--- 608,614 ----
  	n = PyParser_SimpleParseFile(fp, pathname, Py_file_input);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, pathname, NULL);
  	PyNode_Free(n);
  
  	return co;
Index: Python/pythonrun.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Python/pythonrun.c,v
retrieving revision 2.125
diff -c -r2.125 pythonrun.c
*** Python/pythonrun.c	2001/02/28 20:58:04	2.125
--- Python/pythonrun.c	2001/03/01 18:18:28
***************
*** 37,45 ****
  static void initmain(void);
  static void initsite(void);
  static PyObject *run_err_node(node *n, char *filename,
! 			      PyObject *globals, PyObject *locals);
  static PyObject *run_node(node *n, char *filename,
! 			  PyObject *globals, PyObject *locals);
  static PyObject *run_pyc_file(FILE *fp, char *filename,
  			      PyObject *globals, PyObject *locals);
  static void err_input(perrdetail *);
--- 37,47 ----
  static void initmain(void);
  static void initsite(void);
  static PyObject *run_err_node(node *n, char *filename,
! 			      PyObject *globals, PyObject *locals,
! 			      PySessionState *sess);
  static PyObject *run_node(node *n, char *filename,
! 			  PyObject *globals, PyObject *locals,
! 			  PySessionState *sess);
  static PyObject *run_pyc_file(FILE *fp, char *filename,
  			      PyObject *globals, PyObject *locals);
  static void err_input(perrdetail *);
***************
*** 56,62 ****
  extern void _PyCodecRegistry_Init(void);
  extern void _PyCodecRegistry_Fini(void);
  
- 
  int Py_DebugFlag; /* Needed by parser.c */
  int Py_VerboseFlag; /* Needed by import.c */
  int Py_InteractiveFlag; /* Needed by Py_FdIsInteractive() below */
--- 58,63 ----
***************
*** 472,477 ****
--- 473,481 ----
  {
  	PyObject *v;
  	int ret;
+ 	PySessionState sess;
+ 
+ 	sess.ss_nested_scopes = 0;
  	v = PySys_GetObject("ps1");
  	if (v == NULL) {
  		PySys_SetObject("ps1", v = PyString_FromString(">>> "));
***************
*** 483,489 ****
  		Py_XDECREF(v);
  	}
  	for (;;) {
! 		ret = PyRun_InteractiveOne(fp, filename);
  #ifdef Py_REF_DEBUG
  		fprintf(stderr, "[%ld refs]\n", _Py_RefTotal);
  #endif
--- 487,493 ----
  		Py_XDECREF(v);
  	}
  	for (;;) {
! 		ret = PyRun_InteractiveOne(fp, filename, &sess);
  #ifdef Py_REF_DEBUG
  		fprintf(stderr, "[%ld refs]\n", _Py_RefTotal);
  #endif
***************
*** 497,503 ****
  }
  
  int
! PyRun_InteractiveOne(FILE *fp, char *filename)
  {
  	PyObject *m, *d, *v, *w;
  	node *n;
--- 501,507 ----
  }
  
  int
! PyRun_InteractiveOne(FILE *fp, char *filename, PySessionState *sess)
  {
  	PyObject *m, *d, *v, *w;
  	node *n;
***************
*** 537,543 ****
  	if (m == NULL)
  		return -1;
  	d = PyModule_GetDict(m);
! 	v = run_node(n, filename, d, d);
  	if (v == NULL) {
  		PyErr_Print();
  		return -1;
--- 541,547 ----
  	if (m == NULL)
  		return -1;
  	d = PyModule_GetDict(m);
! 	v = run_node(n, filename, d, d, sess);
  	if (v == NULL) {
  		PyErr_Print();
  		return -1;
***************
*** 907,913 ****
  PyRun_String(char *str, int start, PyObject *globals, PyObject *locals)
  {
  	return run_err_node(PyParser_SimpleParseString(str, start),
! 			    "<string>", globals, locals);
  }
  
  PyObject *
--- 911,917 ----
  PyRun_String(char *str, int start, PyObject *globals, PyObject *locals)
  {
  	return run_err_node(PyParser_SimpleParseString(str, start),
! 			    "<string>", globals, locals, NULL);
  }
  
  PyObject *
***************
*** 924,946 ****
  	node *n = PyParser_SimpleParseFile(fp, filename, start);
  	if (closeit)
  		fclose(fp);
! 	return run_err_node(n, filename, globals, locals);
  }
  
  static PyObject *
! run_err_node(node *n, char *filename, PyObject *globals, PyObject *locals)
  {
  	if (n == NULL)
  		return  NULL;
! 	return run_node(n, filename, globals, locals);
  }
  
  static PyObject *
! run_node(node *n, char *filename, PyObject *globals, PyObject *locals)
  {
  	PyCodeObject *co;
  	PyObject *v;
! 	co = PyNode_Compile(n, filename);
  	PyNode_Free(n);
  	if (co == NULL)
  		return NULL;
--- 928,957 ----
  	node *n = PyParser_SimpleParseFile(fp, filename, start);
  	if (closeit)
  		fclose(fp);
! 	return run_err_node(n, filename, globals, locals, NULL);
  }
  
  static PyObject *
! run_err_node(node *n, char *filename, PyObject *globals, PyObject *locals,
! 	     PySessionState *sess)
  {
  	if (n == NULL)
  		return  NULL;
! 	return run_node(n, filename, globals, locals, sess);
  }
  
  static PyObject *
! run_node(node *n, char *filename, PyObject *globals, PyObject *locals,
! 	 PySessionState *sess)
  {
  	PyCodeObject *co;
  	PyObject *v;
! 	if (sess) {
! 		fprintf(stderr, "session state: %d\n",
! 			sess->ss_nested_scopes);
! 	}
! 	/* XXX pass sess->ss_nested_scopes to PyNode_Compile */
! 	co = PyNode_Compile(n, filename, sess);
  	PyNode_Free(n);
  	if (co == NULL)
  		return NULL;
***************
*** 986,992 ****
  	n = PyParser_SimpleParseString(str, start);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, filename);
  	PyNode_Free(n);
  	return (PyObject *)co;
  }
--- 997,1003 ----
  	n = PyParser_SimpleParseString(str, start);
  	if (n == NULL)
  		return NULL;
! 	co = PyNode_Compile(n, filename, NULL);
  	PyNode_Free(n);
  	return (PyObject *)co;
  }



From guido at digicool.com  Thu Mar  1 19:34:53 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 13:34:53 -0500
Subject: [Python-Dev] __future__ and the interactive interpreter prompt
In-Reply-To: Your message of "Thu, 01 Mar 2001 13:16:03 EST."
             <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103011834.NAA16957@cj20424-a.reston1.va.home.com>

> from __future__ import nested_scopes is accepted at the interactive
> interpreter prompt but has no effect beyond the line on which it was
> entered.  You could use it with lambdas entered following a
> semicolon, I guess.
> 
> I would rather see the future statement take effect for the remainder
> of the interactive interpreter session.  I have included a first-cut
> patch below that makes this possible, using an object called
> PySessionState.  (I don't like the name, but don't have a better one;
> PyCompilerFlags?)
> 
> The idea of the session state is to record information about the state
> of an interactive session that may affect compilation.  The
> state object is created in PyRun_InteractiveLoop() and passed all the
> way through to PyNode_Compile().
> 
> Does this seem a reasonable approach?  Should I include it in the
> beta?  Any name suggestions?

I'm not keen on changing the prototypes for PyNode_Compile() and
PyRun_InteractiveOne().  I suspect that folks doing funky stuff might
be calling these directly.

Would it be a great pain to add ...Ex() versions that take a session
state, and have the old versions call this with a made-up dummy
session state?
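[Editor's note: the backward-compatible ...Ex() convention suggested here can be sketched in Python; the names below (CompilerFlags, compile_node, compile_node_ex) are illustrative stand-ins, not the actual C API.]

```python
# Sketch of the ...Ex() convention: the extended entry point takes an
# explicit state object, while the old entry point keeps its signature
# and forwards a dummy state.  All names here are hypothetical.

class CompilerFlags:
    def __init__(self, nested_scopes=0):
        self.nested_scopes = nested_scopes

def compile_node_ex(node, filename, flags):
    # Extended version: compilation can consult per-session flags.
    return (node, filename, flags.nested_scopes)

def compile_node(node, filename):
    # Old API, unchanged signature: existing callers that have never
    # heard of session state keep working.
    return compile_node_ex(node, filename, CompilerFlags())
```

Existing callers of the old name see no change; only the interactive loop needs to construct and thread through a real state object.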

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Thu Mar  1 19:40:58 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 13:40:58 -0500
Subject: [Python-Dev] Finger error -- my apologies
In-Reply-To: Your message of "Thu, 01 Mar 2001 12:47:51 EST."
             <20010301124751.B24835@thyrsus.com> 
References: <20010301124751.B24835@thyrsus.com> 
Message-ID: <200103011840.NAA17088@cj20424-a.reston1.va.home.com>

> I meant to accept this patch, but I think I rejected it instead.
> Sorry, Ping.  Resubmit, please, if I fooed up?

There's no need to resubmit -- you should be able to reset the state
any time.  I've changed it back to None so you can try again.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From esr at thyrsus.com  Thu Mar  1 19:58:57 2001
From: esr at thyrsus.com (Eric S. Raymond)
Date: Thu, 1 Mar 2001 13:58:57 -0500
Subject: [Python-Dev] Finger error -- my apologies
In-Reply-To: <200103011840.NAA17088@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 01, 2001 at 01:40:58PM -0500
References: <20010301124751.B24835@thyrsus.com> <200103011840.NAA17088@cj20424-a.reston1.va.home.com>
Message-ID: <20010301135857.D25553@thyrsus.com>

Guido van Rossum <guido at digicool.com>:
> > I meant to accept this patch, but I think I rejected it instead.
> > Sorry, Ping.  Resubmit, please, if I fooed up?
> 
> There's no need to resubmit -- you should be able to reset the state
> any time.  I've changed it back to None so you can try again.

Done.

I also discovered that I wasn't quite the idiot I thought I had been; I
actually tripped over an odd little misfeature of Mozilla that other 
people working the patch queue should know about.

I saw "Rejected" after I thought I had clicked "Accepted" and thought
I had made both a mouse error and a thinko...

What actually happened was I clicked "Accepted" and then tried to page down
my browser.  Unfortunately the choice field was still selected -- and
guess what the last status value in the pulldown menu is, and
what the PgDn key does! :-)

Others should beware of this...
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Our society won't be truly free until "None of the Above" is always an option.



From tim.one at home.com  Thu Mar  1 20:11:14 2001
From: tim.one at home.com (Tim Peters)
Date: Thu, 1 Mar 2001 14:11:14 -0500
Subject: [Python-Dev] __credits__ and __author__ variables
In-Reply-To: <Pine.LNX.4.10.10103010909520.862-100000@skuld.kingmanhall.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBGJDAA.tim.one@home.com>

OTOH, seeing my name in a __credits__ blurb does nothing for my ego; it
makes me involuntarily shudder at having yet another potential source of
extremely urgent personal email from strangers who can't read <0.9 wink>.

So the question is: should __credits__nanny.py look for its file of names
to rip out via a magically named file or via a command-line argument?

or-maybe-a-gui!-ly y'rs  - tim




From Greg.Wilson at baltimore.com  Thu Mar  1 20:21:13 2001
From: Greg.Wilson at baltimore.com (Greg Wilson)
Date: Thu, 1 Mar 2001 14:21:13 -0500 
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
Message-ID: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com>

I'm working on Solaris, and have configured Python using
--with-cxx=g++.  I have a library "libenf.a", which depends
on several .so's (Eric Young's libeay and a couple of others).
I can't modify the library, but I'd like to wrap it so that
our QA group can write scripts to test it.

My C module was pretty simple to put together.  However, when
I load it, Python (or someone) complains that the symbols that
I know are in "libeay.so" are missing.  It's on LD_LIBRARY_PATH,
and "nm" shows that the symbols really are there.  So:

1. Do I have to do something special to allow Python to load
   .so's that extensions depend on?  If so, what?

2. Or do I have to load the .so myself prior to loading my
   extension?  If so, how?  Explicit "dlopen()" calls at the
   top of "init" don't work (presumably because the built-in
   loading has already decided that some symbols are missing).

Instead of offering a beer for the first correct answer this
time, I promise to write it up and send it to Fred Drake for
inclusion in the 2.1 release notes :-).

Thanks
Greg



From guido at digicool.com  Thu Mar  1 21:32:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 15:32:37 -0500
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: Your message of "Thu, 01 Mar 2001 11:37:53 EST."
             <15006.31457.377477.65547@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <4.3.1.2.20010228223714.00d29430@exchange.starmedia.net>  
            <15006.31457.377477.65547@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103012032.PAA18322@cj20424-a.reston1.va.home.com>

> >>>>> "RT" == Robin Thomas <robin.thomas at starmedia.net> writes:
> 
>   RT> Using Python 2.0 on Win32. Am I the only person to be depressed
>   RT> by the following behavior now that __getitem__ does the work of
>   RT> __getslice__?

Jeremy:
> You may be the only person to have tried it :-).
> 
>   RT> Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32
>   >>> d = {}
>   >>> d[0:1] = 1
>   >>> d
>   {slice(0, 1, None): 1}
> 
> I think this should raise a TypeError (as you suggested later).

Me too, but it's such an unusual corner-case that I can't worry about
it too much.  The problem has to do with being backwards compatible --
we couldn't add the 3rd argument to the slice API that we wanted.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Thu Mar  1 21:58:24 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 15:58:24 -0500 (EST)
Subject: [Python-Dev] __future__ and the interactive interpreter prompt
In-Reply-To: <200103011834.NAA16957@cj20424-a.reston1.va.home.com>
References: <15006.37347.567568.94964@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103011834.NAA16957@cj20424-a.reston1.va.home.com>
Message-ID: <15006.47088.256265.467786@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

  GvR> I'm not keen on changing the prototypes for PyNode_Compile()
  GvR> and PyRun_InteractiveOne().  I suspect that folks doing funky
  GvR> stuff might be calling these directly.

  GvR> Would it be a great pain to add ...Ex() versions that take a
  GvR> session state, and have the old versions call this with a
  GvR> made-up dummy session state?

Doesn't seem like a big problem.  Any other issues with the approach?

Jeremy



From guido at digicool.com  Thu Mar  1 21:46:56 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 15:46:56 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: Your message of "Thu, 01 Mar 2001 17:53:43 +0100."
             <200103011653.RAA09025@core.inf.ethz.ch> 
References: <200103011653.RAA09025@core.inf.ethz.ch> 
Message-ID: <200103012046.PAA18395@cj20424-a.reston1.va.home.com>

> are the semantics (expressed through bytecode instructions) right?

Hi Samuele,

Thanks for bringing this up.  I agree with your predictions for these
examples, and have checked them in as part of the test_scope.py test
suite.  Fortunately Jeremy's code passes the test!

The rule is really pretty simple if you look at it through the right
glasses:

    To resolve a name, search from the inside out for either a scope
    that contains a global statement for that name, or a scope that
    contains a definition for that name (or both).

Thus, on the one hand the effect of a global statement is restricted
to the current scope, excluding nested scopes:

   def f():
       global x
       def g():
           x = 1 # new local

On the other hand, a name mentioned in a global statement hides outer
definitions of the same name, and thus has an effect on nested scopes:

    def f():
       x = 1
       def g():
           global x
           def h():
               return x # global

We shouldn't code like this, but it's good to agree on what it should
mean when encountered!
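[Editor's note: both halves of the rule can be checked with a short script; under nested scopes (the default in modern Python) Guido's second example resolves to the global.]

```python
x = 7

def f():
    x = 1              # f's local; never seen by h below
    def g():
        global x       # hides f's x from the scopes nested inside g
        def h():
            return x   # finds the module-level x, not f's local
        return h()
    return g()

print(f())  # -> 7, per the rule above
```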

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Thu Mar  1 22:05:51 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 16:05:51 -0500 (EST)
Subject: [Python-Dev] nested scopes. global: have I got it right?
In-Reply-To: <000d01c0a1ea$a1d53e60$f55821c0@newmexico>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
	<LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
	<15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
	<000d01c0a1ea$a1d53e60$f55821c0@newmexico>
Message-ID: <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>

> x=7
> def f():
>   global x
>   def g():
>     exec "x=3"
>     return x
>   print g()
> 
> f()
> 
> prints 3, not 7.

I've been meaning to reply to your original post on this subject,
which actually addresses two different issues -- global and exec.  The
example above will fail with a SyntaxError in the nested_scopes
future, because of exec in the presence of a free variable.  The error
message is bad, because it says that exec is illegal in g because g
contains nested scopes.  I may not get to fix that before the beta.

The reasoning about the error here is, as usual with exec, that name
binding is a static or compile-time property of the program text.  The
use of hyper-dynamic features like import * and exec is not allowed
when it may interfere with static resolution of names.
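[Editor's note: for comparison, in modern Python, where exec is a function, the quoted example compiles, but the assignment inside exec still cannot rebind the statically resolved name, for exactly this reason.]

```python
x = 7

def f():
    global x
    def g():
        exec("x = 3")   # binds x only in exec's own locals mapping
        return x        # statically resolved to the global x
    return g()

print(f())  # -> 7 (the Python 2.0 behavior quoted above printed 3)
```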

Buy that?

Jeremy



From guido at digicool.com  Thu Mar  1 22:01:52 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 16:01:52 -0500
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: Your message of "Thu, 01 Mar 2001 15:54:55 EST."
             <15006.46879.607192.367739@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <15006.46879.607192.367739@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103012101.QAA18516@cj20424-a.reston1.va.home.com>

(Adding python-dev, keeping python-list)

> Quoth Robin Thomas <robin.thomas at starmedia.net>:
> | Using Python 2.0 on Win32. Am I the only person to be depressed by the 
> | following behavior now that __getitem__ does the work of __getslice__?
> |
> | Python 2.0 (#8, Oct 16 2000, 17:27:58) [MSC 32 bit (Intel)] on win32
> |  >>> d = {}
> |  >>> d[0:1] = 1
> |  >>> d
> | {slice(0, 1, None): 1}
> |
> | And then, for more depression:
> |
> |  >>> d[0:1] = 2
> |  >>> d
> | {slice(0, 1, None): 1, slice(0, 1, None): 2}
> |
> | And then, for extra extra chagrin:
> |
> |  >>> print d[0:1]
> | Traceback (innermost last):
> |    File "<pyshell#11>", line 1, in ?
> |      d[0:1]
> | KeyError: slice(0, 1, None)
> 
> If it helps, you ruined my day.

Mine too. :-)

> | So, questions:
> |
> | 1) Is this behavior considered a bug by the BDFL or the community at large?

I can't speak for the community, but it smells like a bug to me.

> | If so, has a fix been conceived? Am I re-opening a long-resolved issue?

No, and no.

> | 2) If we're still open to proposed solutions, which of the following do you 
> | like:
> |
> |     a) make slices hash and cmp as their 3-tuple (start,stop,step),
> |        so that if I accidentally set a slice object as a key,
> |        I can at least re-set it or get it or del it :)

Good idea.  The SF patch manager is always open.

> |     b) have dict.__setitem__ expressly reject objects of SliceType
> |        as keys, raising your favorite in (TypeError, ValueError)

This is *also* a good idea.
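[Editor's note: suggestion (b) can be sketched as a thin dict wrapper; the class name is a hypothetical illustration, not the eventual C-level fix.]

```python
class NoSliceDict(dict):
    """Sketch of suggestion (b): refuse slice objects as keys, so a
    mistyped d[0:1] = ... fails loudly instead of silently storing
    slice(0, 1, None) as a key."""

    def __setitem__(self, key, value):
        if isinstance(key, slice):
            raise TypeError("dict does not support slice assignment")
        dict.__setitem__(self, key, value)

d = NoSliceDict()
d[0] = 1            # normal item assignment still works
try:
    d[0:1] = 2      # caught, instead of storing a slice key
except TypeError as exc:
    print(exc)
```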

> From: Donn Cave <donn at oz.net>
> 
> I think we might be able to do better.  I hacked in a quick fix
> in ceval.c that looks to me like it has the desired effect without
> closing the door to intentional slice keys (however unlikely.)
[...]
> *** Python/ceval.c.dist Thu Feb  1 14:48:12 2001
> --- Python/ceval.c      Wed Feb 28 21:52:55 2001
> ***************
> *** 3168,3173 ****
> --- 3168,3178 ----
>         /* u[v:w] = x */
>   {
>         int ilow = 0, ihigh = INT_MAX;
> +       if (u->ob_type->tp_as_mapping) {
> +               PyErr_SetString(PyExc_TypeError,
> +                       "dict object doesn't support slice assignment");
> +               return -1;
> +       }
>         if (!_PyEval_SliceIndex(v, &ilow))
>                 return -1;
>         if (!_PyEval_SliceIndex(w, &ihigh))

Alas, this isn't right.  It defeats the purpose completely: the whole
point was that you should be able to write a sequence class that
supports extended slices.  This uses __getitem__ and __setitem__, but
class instances have a nonzero tp_as_mapping pointer too!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From barry at digicool.com  Thu Mar  1 22:11:32 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 1 Mar 2001 16:11:32 -0500
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
References: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com>
Message-ID: <15006.47876.237152.882774@anthem.wooz.org>

>>>>> "GW" == Greg Wilson <Greg.Wilson at baltimore.com> writes:

    GW> I'm working on Solaris, and have configured Python using
    GW> --with-cxx=g++.  I have a library "libenf.a", which depends on
    GW> several .so's (Eric Young's libeay and a couple of others).  I
    GW> can't modify the library, but I'd like to wrap it so that our
    GW> QA group can write scripts to test it.

    GW> My C module was pretty simple to put together.  However, when
    GW> I load it, Python (or someone) complains that the symbols that
    GW> I know are in "libeay.so" are missing.  It's on
    GW> LD_LIBRARY_PATH, and "nm" shows that the symbols really are
    GW> there.  So:

    | 1. Do I have to do something special to allow Python to load
    |    .so's that extensions depend on?  If so, what?

Greg, it's been a while since I've worked on Solaris, but here's what
I remember.  This is all circa Solaris 2.5/2.6.

LD_LIBRARY_PATH only helps the linker find dynamic libraries at
compile/link time.  It's equivalent to the compiler's -L option.  It
does /not/ help the dynamic linker (ld.so) find your libraries at
run-time.  For that, you need LD_RUN_PATH or the -R option.  I'm of
the opinion that if you are specifying -L to the compiler, you should
always also specify -R, and that using -L/-R is always better than
LD_LIBRARY_PATH/LD_RUN_PATH (because the former is done by the person
doing the install and the latter is a burden imposed on all your
users).

There's an easy way to tell if your .so's are going to give you
problems.  Run `ldd mymodule.so' and see what the linker shows for the
dependencies.  If ldd can't find a dependency, it'll tell you,
otherwise, it shows you the path to the dependent .so files.  If ldd
has a problem, you'll have a problem when you try to import it.

IIRC, distutils had a problem in this regard a while back, but these
days it seems to Just Work for me on Linux.  However, Linux is
slightly different in that there's a file /etc/ld.so.conf that you can
use to specify additional directories for ld.so to search at run-time,
so it can be fixed "after the fact".

    GW> Instead of offering a beer for the first correct answer this
    GW> time, I promise to write it up and send it to Fred Drake for
    GW> inclusion in the 2.1 release notes :-).

Oh no you don't!  You don't get off that easily.  See you next
week. :)

-Barry



From barry at digicool.com  Thu Mar  1 22:21:37 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 1 Mar 2001 16:21:37 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
References: <200103011653.RAA09025@core.inf.ethz.ch>
	<200103012046.PAA18395@cj20424-a.reston1.va.home.com>
Message-ID: <15006.48481.807174.69908@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    GvR>     To resolve a name, search from the inside out for either
    GvR> a scope that contains a global statement for that name, or a
    GvR> scope that contains a definition for that name (or both).

I think that's an excellent rule, Guido -- hopefully it's captured
somewhere in the docs. :)  I think it yields behavior that is both easily
discovered by visual code inspection and easily understood.
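A quick modern-Python illustration of the rule (example mine): the search proceeds from the inside out and stops at the first scope that either binds the name or declares it global.

```python
x = "global"

def outer():
    x = "outer"       # nearest enclosing binding: inner's x resolves here
    def inner():
        return x
    return inner()

def f():
    global x          # a global statement also stops the search
    def g():
        return x      # resolves to the module-level x
    return g()

assert outer() == "outer"
assert f() == "global"
```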

-Barry



From greg at cosc.canterbury.ac.nz  Thu Mar  1 22:54:45 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 02 Mar 2001 10:54:45 +1300 (NZDT)
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <15006.32624.826559.907667@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <200103012154.KAA02307@s454.cosc.canterbury.ac.nz>

Jeremy:

> My preference is that global kills the local-decl only in one scope.

I agree, because otherwise there would be no way of
*undoing* the effect of a global in an outer scope.

The way things are, I can write a function

  def f():
    x = 3
    return x

and be assured that x will always be local, no matter what
environment I move the function into. I like this property.
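Greg's property is easy to check (example mine): even when an enclosing scope binds the same name, the assignment inside f keeps x local.

```python
def outer():
    x = 99         # enclosing binding for x
    def f():
        x = 3      # assignment makes x local to f, whatever the environment
        return x
    return f(), x  # f's x never leaks into the enclosing scope

assert outer() == (3, 99)
```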

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From thomas at xs4all.net  Thu Mar  1 23:04:22 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 1 Mar 2001 23:04:22 +0100
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
In-Reply-To: <15006.47876.237152.882774@anthem.wooz.org>; from barry@digicool.com on Thu, Mar 01, 2001 at 04:11:32PM -0500
References: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com> <15006.47876.237152.882774@anthem.wooz.org>
Message-ID: <20010301230422.M9678@xs4all.nl>

On Thu, Mar 01, 2001 at 04:11:32PM -0500, Barry A. Warsaw wrote:

>     | 1. Do I have to do something special to allow Python to load
>     |    .so's that extensions depend on?  If so, what?

> Greg, it's been a while since I've worked on Solaris, but here's what
> I remember.  This is all circa Solaris 2.5/2.6.

It worked the same way in SunOS 4.x, I believe.

> I'm of the opinion that if you are specifying -L to the compiler, you
> should always also specify -R, and that using -L/-R is always better than
> LD_LIBRARY_PATH/LD_RUN_PATH (because the former is done by the person
> doing the install and the latter is a burden imposed on all your users).

FWIW, I concur with the entire story. In my experience it's pretty
SunOS/Solaris specific (in fact, I long wondered why one of my C books spent
so much time explaining -R/-L, even though it wasn't necessary on my
platforms of choice at that time ;) but it might also apply to other
Solaris-inspired shared-library environments (HP-UX ? AIX ? IRIX ?)

> IIRC, distutils had a problem in this regard a while back, but these
> days it seems to Just Work for me on Linux.  However, Linux is
> slightly different in that there's a file /etc/ld.so.conf that you can
> use to specify additional directories for ld.so to search at run-time,
> so it can be fixed "after the fact".

BSDI uses the same /etc/ld.so.conf mechanism. However, LD_LIBRARY_PATH does
get used on linux, BSDI and IIRC FreeBSD as well, but as a runtime
environment variable. The /etc/ld.so.conf file gets compiled into a cache of
available libraries using 'ldconfig'. On FreeBSD, there is no
'/etc/ld.so.conf' file; instead, you use 'ldconfig -m <path>' to add <path>
to the current cache, and add or modify the definition of
${ldconfig_path} in /etc/rc.conf. (which is used in the bootup procedure to
create a new cache, in case the old one was f'd up.)

I imagine OpenBSD and NetBSD are based off of FreeBSD, not BSDI. (BSDI was
late in adopting ELF, and obviously based most of it on Linux, for some
reason.)

I-wonder-how-it-works-on-Windows-ly y'rs,

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From barry at digicool.com  Thu Mar  1 23:12:27 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 1 Mar 2001 17:12:27 -0500
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
References: <930BBCA4CEBBD411BE6500508BB3328F1AC0A8@nsamcanms1.ca.baltimore.com>
	<15006.47876.237152.882774@anthem.wooz.org>
	<20010301230422.M9678@xs4all.nl>
Message-ID: <15006.51531.427250.884726@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    >> Greg, it's been a while since I've worked on Solaris, but
    >> here's what I remember.  This is all circa Solaris 2.5/2.6.

    TW> It worked the same way in SunOS 4.x, I believe.

Ah, yes, I remember SunOS 4.x.  Remember SunOS 3.5 and earlier?  Or
even the Sun 1's?  :) NIST/NBS had at least one of those boxes still
rattling around when I left.  IIRC, it ran our old news server for
years.

good-old-days-ly y'rs,
-Barry



From thomas at xs4all.net  Thu Mar  1 23:21:07 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 1 Mar 2001 23:21:07 +0100
Subject: [Python-Dev] Re: d = {}; d[0:1] = 1; d[0:1] = 2; print d[:]
In-Reply-To: <200103012101.QAA18516@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 01, 2001 at 04:01:52PM -0500
References: <15006.46879.607192.367739@w221.z064000254.bwi-md.dsl.cnc.net> <200103012101.QAA18516@cj20424-a.reston1.va.home.com>
Message-ID: <20010301232107.O9678@xs4all.nl>

On Thu, Mar 01, 2001 at 04:01:52PM -0500, Guido van Rossum wrote:
> > Quoth Robin Thomas <robin.thomas at starmedia.net>:

[ Dicts accept slice objects as keys in assignment, but not in retrieval ]

> > | 1) Is this behavior considered a bug by the BDFL or the community at large?

> I can't speak for the community, but it smells like a bug to me.

Speaking for the person who implemented the slice-fallback to sliceobjects:
yes, it's a bug, because it's an unintended consequence of the change :) The
intention was to eradicate the silly discrepancy between indexing, normal
slices and extended slices: normal indexing works through __getitem__,
sq_item and mp_subscript. Normal (two argument) slices work through
__getslice__ and sq_slice. Extended slices work through __getitem__, sq_item
and mp_subscript again.

Note, however, that though *this* particular bug is new in Python 2.0, it
wasn't actually absent in 1.5.2 either!

Python 1.5.2 (#0, Feb 20 2001, 23:57:58)  [GCC 2.95.3 20010125 (prerelease)]
on linux2
Copyright 1991-1995 Stichting Mathematisch Centrum, Amsterdam
>>> d = {}
>>> d[0:1] = "spam"
Traceback (innermost last):
  File "<stdin>", line 1, in ?
TypeError: object doesn't support slice assignment
>>> d[0:1:1] = "spam"
>>> d[0:1:] = "spam"
>>> d
{slice(0, 1, None): 'spam', slice(0, 1, 1): 'spam'}

The bug is just extended to cover normal slices as well, because the absence
of sq_slice now causes Python to fall back to normal item setting/retrieval.

I think making slices hashable objects makes the most sense. They can just
be treated as a three-tuple of the values in the slice, or some such.
Falling back to just sq_item/__getitem__ and not mp_subscript might make
some sense, but it seems a bit of an artificial split, since classes that
pretend to be mappings would be treated differently than types that pretend
to be mappings.
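A sketch of that idea (my code, not what was eventually implemented): a slice-like key that hashes and compares as its (start, stop, step) triple.

```python
class HashableSlice:
    """Hypothetical hashable stand-in for slice(start, stop, step)."""
    def __init__(self, start, stop, step=None):
        self.start, self.stop, self.step = start, stop, step

    def _key(self):
        return (self.start, self.stop, self.step)

    def __hash__(self):
        # Treat the slice as a three-tuple, per the suggestion above
        return hash(self._key())

    def __eq__(self, other):
        return (isinstance(other, HashableSlice)
                and self._key() == other._key())

d = {}
d[HashableSlice(0, 1)] = 'spam'
print(d[HashableSlice(0, 1, None)])  # -> spam
```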

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim.one at home.com  Thu Mar  1 23:37:35 2001
From: tim.one at home.com (Tim Peters)
Date: Thu, 1 Mar 2001 17:37:35 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: <15006.48481.807174.69908@anthem.wooz.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCMECKJDAA.tim.one@home.com>

> >>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:
>
>     GvR>     To resolve a name, search from the inside out for either
>     GvR> a scope that contains a global statement for that name, or a
>     GvR> scope that contains a definition for that name (or both).
>
[Barry A. Warsaw]
> I think that's an excellent rule Guido --

Hmm.  After an hour of consideration, I would agree, provided only that the
rule also say you *stop* upon finding the first one <wink>.

> hopefully it's captured somewhere in the docs. :)

The python-dev archives are incorporated into the docs by implicit reference.

you-found-it-you-fix-it-ly y'rs  - tim




From martin at loewis.home.cs.tu-berlin.de  Thu Mar  1 23:39:01 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 1 Mar 2001 23:39:01 +0100
Subject: [Python-Dev] Extensions that depend on .so's on Solaris
Message-ID: <200103012239.f21Md1i01641@mira.informatik.hu-berlin.de>

> I have a library "libenf.a", which depends on several .so's (Eric
> Young's libeay and a couple of others).

> My C module was pretty simple to put together.  However, when I load
> it, Python (or someone) complains that the symbols that I know are
> in "libeay.so" are missing.

If it says that the symbols are missing, it is *not* a problem of
LD_LIBRARY_PATH, LD_RUN_PATH (I can't find documentation or any mention
of that variable anywhere...), or the -R option.

Instead, the most likely cause is that you forgot to link the .so when
linking the extension module. I.e. you should do

gcc -o foomodule.so foomodule.o -lenf -leay

If you omit the -leay, you get a shared object which will report
missing symbols when being loaded, except when the shared library was
loaded already for some other reason.

If you *did* specify -leay, it still might be that the symbols are not
available in the shared library. You said that nm displayed them, but
will nm still display them after you applied strip(1) to the library?
To see the symbols found by ld.so.1, you need to use the -D option of
nm(1).

Regards,
Martin



From jeremy at alum.mit.edu  Fri Mar  2 00:34:44 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 18:34:44 -0500 (EST)
Subject: [Python-Dev] nested scopes and future status
Message-ID: <15006.56468.16421.206413@w221.z064000254.bwi-md.dsl.cnc.net>

There are several loose ends in the nested scopes changes that I won't
have time to fix before the beta.  Here's a laundry list of tasks that
remain.  I don't think any of these is crucial for the release.
Holler if there's something you'd like me to fix tonight.

- Integrate the parsing of future statements into the _symtable
  module's interface.  This interface is new with 2.1 and
  undocumented, so its deficiency here will not affect any code.

- Update traceback.py to understand SyntaxErrors that have a text
  attribute and an offset of None.  It should not print the caret.

- PyErr_ProgramText() should be called when an exception is printed
  rather than when it is raised.

- Fix pdb to support nested scopes.

- Produce a better error message/warning for code like this:
  def f(x):
      def g():
          exec ...
          print x
  The warning message should say that exec is not allowed in a nested
  function with free variables.  It currently says that g *contains* a
  nested function with free variables.

- Update the documentation.

Jeremy



From pedroni at inf.ethz.ch  Fri Mar  2 00:22:20 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Fri, 2 Mar 2001 00:22:20 +0100
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net><LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com><15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net><000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <004101c0a2a6$781cd440$f979fea9@newmexico>

Hi.


> > x=7
> > def f():
> >   global x
> >   def g():
> >     exec "x=3"
> >     return x
> >   print g()
> > 
> > f()
> > 
> > prints 3, not 7.
> 
> I've been meaning to reply to your original post on this subject,
> which actually addresses two different issues -- global and exec.  The
> example above will fail with a SyntaxError in the nested_scopes
> future, because of exec in the presence of a free variable.  The error
> message is bad, because it says that exec is illegal in g because g
> contains nested scopes.  I may not get to fix that before the beta.
> 
> The reasoning about the error here is, as usual with exec, that name
> binding is a static or compile-time property of the program text.  The
> use of hyper-dynamic features like import * and exec are not allowed
> when they may interfere with static resolution of names.
> 
> Buy that?
Yes I buy that. (I had tried it with the old a2)
So will this code also raise an error, or am I not understanding the point
and the error happens because of the global declaration?

# top-level
def g():
  exec "x=3"
  return x

That's OK with me, but it kills many naive uses of exec.  I'm wondering
whether it doesn't make more sense to take the next big step directly and
issue an error (under future nested_scopes) for *all* uses of exec without
"in", because every use of a builtin will cause the error...

regards




From jeremy at alum.mit.edu  Fri Mar  2 00:22:28 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 18:22:28 -0500 (EST)
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: <004101c0a2a6$781cd440$f979fea9@newmexico>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
	<LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
	<15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
	<000d01c0a1ea$a1d53e60$f55821c0@newmexico>
	<15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
	<004101c0a2a6$781cd440$f979fea9@newmexico>
Message-ID: <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "SP" == Samuele Pedroni <pedroni at inf.ethz.ch> writes:

  SP> # top-level
  SP> def g():
  SP>   exec "x=3" 
  SP>   return x

At the top level, no closure is created, because the enclosing scope
is not a function scope.  I believe that's the right thing to do,
except that the exec "x=3" also assigns to the global.

I'm not sure if there is a strong justification for allowing this
form, except that it is the version of exec that is most likely to
occur in legacy code.

Jeremy



From guido at digicool.com  Fri Mar  2 03:17:38 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 21:17:38 -0500
Subject: [Python-Dev] just trying to catch up with the semantic
In-Reply-To: Your message of "Thu, 01 Mar 2001 17:37:35 EST."
             <LNBBLJKPBEHFEDALKOLCMECKJDAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCMECKJDAA.tim.one@home.com> 
Message-ID: <200103020217.VAA19891@cj20424-a.reston1.va.home.com>

> > >>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:
> >
> >     GvR>     To resolve a name, search from the inside out for either
> >     GvR> a scope that contains a global statement for that name, or a
> >     GvR> scope that contains a definition for that name (or both).
> >
> [Barry A. Warsaw]
> > I think that's an excellent rule Guido --
> 
> Hmm.  After an hour of consideration,

That's quick -- it took me longer than that to come to the conclusion
that Jeremy had actually done the right thing. :-)

> I would agree, provided only that the
> rule also say you *stop* upon finding the first one <wink>.
> 
> > hopefully it's captured somewhere in the docs. :)
> 
> The python-dev archives are incorporated into the docs by implicit reference.
> 
> you-found-it-you-fix-it-ly y'rs  - tim

I'm sure the docs can stand some updates after the 2.1b1 crunch is
over to document what all we did.  After the conference!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Fri Mar  2 03:35:01 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 21:35:01 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 18:22:28 EST."
             <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico>  
            <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103020235.VAA22273@cj20424-a.reston1.va.home.com>

> >>>>> "SP" == Samuele Pedroni <pedroni at inf.ethz.ch> writes:
> 
>   SP> # top-level
>   SP> def g():
>   SP>   exec "x=3" 
>   SP>   return x
> 
> At the top-level, there is no closure created by the enclosing scope
> is not a function scope.  I believe that's the right thing to do,
> except that the exec "x=3" also assign to the global.
> 
> I'm not sure if there is a strong justification for allowing this
> form, except that it is the version of exec that is most likely to
> occur in legacy code.

Unfortunately this used to work, using a gross hack: when an exec (or
import *) was present inside a function, the namespace semantics *for
that function* was changed to the pre-0.9.1 semantics, where all names
are looked up *at run time* first in the locals then in the globals
and then in the builtins.
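The pre-0.9.1 lookup order described above can be sketched as follows (illustrative Python, not CPython's actual C code):

```python
def dynamic_lookup(name, local_ns, global_ns, builtin_ns):
    """Run-time name resolution: locals, then globals, then builtins."""
    for ns in (local_ns, global_ns, builtin_ns):
        if name in ns:
            return ns[name]
    raise NameError("name %r is not defined" % name)

# exec "x=3" writes into the local namespace, so a later read of x
# finds the local binding before the global one:
print(dynamic_lookup("x", {"x": 3}, {"x": 7}, {}))  # -> 3
```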

I don't know how common this is -- it's pretty fragile.  If there's a
great clamor, we can put this behavior back after b1 is released.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Fri Mar  2 03:43:34 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 21:43:34 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 21:35:01 EST."
             <200103020235.VAA22273@cj20424-a.reston1.va.home.com> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico> <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>  
            <200103020235.VAA22273@cj20424-a.reston1.va.home.com> 
Message-ID: <200103020243.VAA24384@cj20424-a.reston1.va.home.com>

> >   SP> # top-level
> >   SP> def g():
> >   SP>   exec "x=3" 
> >   SP>   return x

[me]
> Unfortunately this used to work, using a gross hack: when an exec (or
> import *) was present inside a function, the namespace semantics *for
> that function* was changed to the pre-0.9.1 semantics, where all names
> are looked up *at run time* first in the locals then in the globals
> and then in the builtins.
> 
> I don't know how common this is -- it's pretty fragile.  If there's a
> great clamor, we can put this behavior back after b1 is released.

I spoke too soon.  It just works in the latest 2.1b1.  Or am I missing
something?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From ping at lfw.org  Fri Mar  2 03:50:41 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Thu, 1 Mar 2001 18:50:41 -0800 (PST)
Subject: [Python-Dev] Re: Is outlawing-nested-import-* only an implementation issue?
In-Reply-To: <14998.33979.566557.956297@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <Pine.LNX.4.10.10102241727410.13155-100000@localhost>

On Fri, 23 Feb 2001, Jeremy Hylton wrote:
> I think the meaning of print x should be statically determined.  That
> is, the programmer should be able to determine the binding environment
> in which x will be resolved (for print x) by inspection of the code.

I haven't had time in a while to follow up on this thread, but i just
wanted to say that i think this is a reasonable and sane course of
action.  I see the flaws in the model i was advocating, and i'm sorry
for consuming all that time in the discussion.


-- ?!ng


Post Scriptum:

On Fri, 23 Feb 2001, Jeremy Hylton wrote:
>   KPY> I tried STk Scheme, guile, and elisp, and they all do this.
> 
> I guess I'm just dense then.  Can you show me an example?

The example is pretty much exactly what you wrote:

    (define (f)
        (eval '(define y 2))
        y)

It produced 2.

But several sources have confirmed that this is just bad implementation
behaviour, so i'm willing to consider that a red herring.  Believe it
or not, in some Schemes, the following actually happens!

            STk> (define x 1)
            x
            STk> (define (func flag)
                     (if flag (define x 2))
                     (lambda () (set! x 3)))
            func
            STk> ((func #t))
            STk> x
            1
            STk> ((func #f))
            STk> x
            3

More than one professor that i showed the above to screamed.





From jeremy at alum.mit.edu  Fri Mar  2 02:12:37 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 20:12:37 -0500 (EST)
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: <200103020243.VAA24384@cj20424-a.reston1.va.home.com>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
	<LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
	<15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
	<000d01c0a1ea$a1d53e60$f55821c0@newmexico>
	<15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
	<004101c0a2a6$781cd440$f979fea9@newmexico>
	<15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103020235.VAA22273@cj20424-a.reston1.va.home.com>
	<200103020243.VAA24384@cj20424-a.reston1.va.home.com>
Message-ID: <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

  >> >   SP> # top-level
  >> >   SP> def g():
  >> >   SP>   exec "x=3" return x

  GvR> [me]
  >> Unfortunately this used to work, using a gross hack: when an exec
  >> (or import *) was present inside a function, the namespace
  >> semantics *for that function* was changed to the pre-0.9.1
  >> semantics, where all names are looked up *at run time* first in
  >> the locals then in the globals and then in the builtins.
  >>
  >> I don't know how common this is -- it's pretty fragile.  If
  >> there's a great clamor, we can put this behavior back after b1 is
  >> released.

  GvR> I spoke too soon.  It just works in the latest 2.1b1.  Or am I
  GvR> missing something?

The nested scopes rules don't kick in until you've got one function
nested in another.  The top-level namespace is treated differently
than other function namespaces.  If a function is defined at the
top-level then all its free variables are globals.  As a result, the
old rules still apply.

Since class scopes are ignored for nesting, methods defined in
top-level classes are handled the same way.

I'm not completely sure this makes sense, although it limits code
breakage; most functions are defined at the top-level or in classes!
I think it is fairly clear, though.

Jeremy



From guido at digicool.com  Fri Mar  2 04:04:19 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 22:04:19 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 20:12:37 EST."
             <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico> <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net> <200103020235.VAA22273@cj20424-a.reston1.va.home.com> <200103020243.VAA24384@cj20424-a.reston1.va.home.com>  
            <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103020304.WAA24620@cj20424-a.reston1.va.home.com>

>   >> >   SP> # top-level
>   >> >   SP> def g():
>   >> >   SP>   exec "x=3" return x
> 
>   GvR> [me]
>   >> Unfortunately this used to work, using a gross hack: when an exec
>   >> (or import *) was present inside a function, the namespace
>   >> semantics *for that function* was changed to the pre-0.9.1
>   >> semantics, where all names are looked up *at run time* first in
>   >> the locals then in the globals and then in the builtins.
>   >>
>   >> I don't know how common this is -- it's pretty fragile.  If
>   >> there's a great clamor, we can put this behavior back after b1 is
>   >> released.
> 
>   GvR> I spoke too soon.  It just works in the latest 2.1b1.  Or am I
>   GvR> missing something?
> 
> The nested scopes rules don't kick in until you've got one function
> nested in another.  The top-level namespace is treated differently
> that other function namespaces.  If a function is defined at the
> top-level then all its free variables are globals.  As a result, the
> old rules still apply.

This doesn't make sense.  If the free variables were truly considered
globals, the reference to x would raise a NameError, because the exec
doesn't define it at the global level -- it defines it at the local
level.  So apparently you are generating LOAD_NAME instead of
LOAD_GLOBAL for free variables in toplevel functions.  Oh well, this
does the job!
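In today's CPython the same distinction is visible with the dis module (modern bytecode, shown only to illustrate the LOAD_GLOBAL choice; 2.1's unoptimized top-level frames used LOAD_NAME instead):

```python
import dis

def f():
    return x  # x is free in an optimized function namespace

# An ordinary function compiles free variables to LOAD_GLOBAL;
# module-level and class-body code still use the dynamic LOAD_NAME.
opnames = {instr.opname for instr in dis.get_instructions(f)}
print("LOAD_GLOBAL" in opnames)  # -> True
```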

> Since class scopes are ignored for nesting, methods defined in
> top-level classes are handled the same way.
> 
> I'm not completely sure this makes sense, although it limits code
> breakage; most functions are defined at the top-level or in classes!
> I think it is fairly clear, though.

Yeah, it's pretty unlikely that there will be much code breakage of
this form:

def f():
    def g():
        exec "x = 1"
        return x

(Hm, trying this I see that it generates a warning, but with the wrong
filename.  I'll see if I can use symtable_warn() here.)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Fri Mar  2 02:31:28 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Thu, 1 Mar 2001 20:31:28 -0500 (EST)
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: <200103020304.WAA24620@cj20424-a.reston1.va.home.com>
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net>
	<LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com>
	<15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net>
	<000d01c0a1ea$a1d53e60$f55821c0@newmexico>
	<15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net>
	<004101c0a2a6$781cd440$f979fea9@newmexico>
	<15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103020235.VAA22273@cj20424-a.reston1.va.home.com>
	<200103020243.VAA24384@cj20424-a.reston1.va.home.com>
	<15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103020304.WAA24620@cj20424-a.reston1.va.home.com>
Message-ID: <15006.63472.628208.875808@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

  >> The nested scopes rules don't kick in until you've got one
  >> function nested in another.  The top-level namespace is treated
  >> differently that other function namespaces.  If a function is
  >> defined at the top-level then all its free variables are globals.
  >> As a result, the old rules still apply.

  GvR> This doesn't make sense.  If the free variables were truely
  GvR> considered globals, the reference to x would raise a NameError,
  GvR> because the exec doesn't define it at the global level -- it
  GvR> defines it at the local level.  So apparently you are
  GvR> generating LOAD_NAME instead of LOAD_GLOBAL for free variables
  GvR> in toplevel functions.  Oh well, this does the job!

Actually, I only generate LOAD_NAME for unoptimized, top-level
function namespaces.  These are exactly the old rules and I avoided
changing them for top-level functions, except when they contained a
nested function.

If we eliminate exec without "in," this is yet another problem that
goes away.

Jeremy



From guido at digicool.com  Fri Mar  2 05:07:16 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 01 Mar 2001 23:07:16 -0500
Subject: [Python-Dev] violently deprecating exec without in (was: nested scopes. global: have I got it right?)
In-Reply-To: Your message of "Thu, 01 Mar 2001 20:31:28 EST."
             <15006.63472.628208.875808@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <15005.32325.816795.62903@w221.z064000254.bwi-md.dsl.cnc.net> <LNBBLJKPBEHFEDALKOLCAENMJCAA.tim.one@home.com> <15005.32815.255120.318709@w221.z064000254.bwi-md.dsl.cnc.net> <000d01c0a1ea$a1d53e60$f55821c0@newmexico> <15006.47535.56661.697207@w221.z064000254.bwi-md.dsl.cnc.net> <004101c0a2a6$781cd440$f979fea9@newmexico> <15006.55732.44786.466044@w221.z064000254.bwi-md.dsl.cnc.net> <200103020235.VAA22273@cj20424-a.reston1.va.home.com> <200103020243.VAA24384@cj20424-a.reston1.va.home.com> <15006.62341.338359.803041@w221.z064000254.bwi-md.dsl.cnc.net> <200103020304.WAA24620@cj20424-a.reston1.va.home.com>  
            <15006.63472.628208.875808@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103020407.XAA30061@cj20424-a.reston1.va.home.com>

[Jeremy]
>   >> The nested scopes rules don't kick in until you've got one
>   >> function nested in another.  The top-level namespace is treated
>   >> differently than other function namespaces.  If a function is
>   >> defined at the top-level then all its free variables are globals.
>   >> As a result, the old rules still apply.
> 
>   GvR> This doesn't make sense.  If the free variables were truly
>   GvR> considered globals, the reference to x would raise a NameError,
>   GvR> because the exec doesn't define it at the global level -- it
>   GvR> defines it at the local level.  So apparently you are
>   GvR> generating LOAD_NAME instead of LOAD_GLOBAL for free variables
>   GvR> in toplevel functions.  Oh well, this does the job!

[Jeremy]
> Actually, I only generate LOAD_NAME for unoptimized, top-level
> function namespaces.  These are exactly the old rules and I avoided
> changing them for top-level functions, except when they contained a
> nested function.

Aha.

> If we eliminate exec without "in," this is yet another problem that
> goes away.

But that's for another release...  That will probably get a lot of
resistance from some category of users!

So it's fine for now.  Thanks, Jeremy!  Great job!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fredrik at effbot.org  Fri Mar  2 09:35:59 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Fri, 2 Mar 2001 09:35:59 +0100
Subject: [Python-Dev] a small C style question
Message-ID: <05f101c0a2f3$cf4bae10$e46940d5@hagrid>

DEC's OpenVMS compiler is a bit pickier than most other compilers.
among other things, it correctly notices that the "code" variable in
this statement is an unsigned variable:

    UNICODEDATA:

        if (code < 0 || code >= 65536)
    ........^
    %CC-I-QUESTCOMPARE, In this statement, the unsigned 
    expression "code" is being compared with a relational
    operator to a constant whose value is not greater than
    zero.  This might not be what you intended.
    at line number 285 in file UNICODEDATA.C

the easiest solution would of course be to remove the "code < 0"
part, but code is a Py_UCS4 variable.  what if someone some day
changes Py_UCS4 to a 64-bit signed integer, for example?

what's the preferred style?

1) leave it as is, and let OpenVMS folks live with the
compiler complaint

2) get rid of "code < 0" and hope that nobody messes
up the Py_UCS4 declaration

3) cast "code" to a known unsigned type, e.g.:

        if ((unsigned int) code >= 65536)

Cheers /F




From mwh21 at cam.ac.uk  Fri Mar  2 13:58:49 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: Fri, 2 Mar 2001 12:58:49 +0000 (GMT)
Subject: [Python-Dev] python-dev summary, 2001-02-15 - 2001-03-01
Message-ID: <Pine.LNX.4.10.10103021255240.18596-100000@localhost.localdomain>

Thanks for all the positive feedback for the last summary!

 This is a summary of traffic on the python-dev mailing list between
 Feb 15 and Feb 28 (inclusive) 2001.  It is intended to inform the
 wider Python community of ongoing developments.  To comment, just
 post to python-list at python.org or comp.lang.python in the usual
 way. Give your posting a meaningful subject line, and if it's about a
 PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep
 iteration).  All python-dev members are interested in seeing ideas
 discussed by the community, so don't hesitate to take a stance on a
 PEP if you have an opinion.

 This is the second python-dev summary written by Michael Hudson.
 Previous summaries were written by Andrew Kuchling and can be found
 at:

   <http://www.amk.ca/python/dev/>

 New summaries will appear at:

  <http://starship.python.net/crew/mwh/summaries/>

 and will continue to be archived at Andrew's site.

   Posting distribution (with apologies to mbm)

   Number of articles in summary: 400

       |                         ]|[                            
       |                         ]|[                            
    60 |                         ]|[                            
       |                         ]|[                            
       |                         ]|[                            
       |                         ]|[                     ]|[    
       |                         ]|[     ]|[             ]|[    
       |                         ]|[     ]|[             ]|[    
    40 |                         ]|[     ]|[             ]|[ ]|[
       |                         ]|[     ]|[             ]|[ ]|[
       |     ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
    20 | ]|[ ]|[                 ]|[ ]|[ ]|[             ]|[ ]|[
       | ]|[ ]|[             ]|[ ]|[ ]|[ ]|[         ]|[ ]|[ ]|[
       | ]|[ ]|[             ]|[ ]|[ ]|[ ]|[         ]|[ ]|[ ]|[
       | ]|[ ]|[             ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[
       | ]|[ ]|[     ]|[     ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[
       | ]|[ ]|[     ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[
     0 +-033-037-002-008-006-021-071-037-051-012-002-021-054-045
        Thu 15| Sat 17| Mon 19| Wed 21| Fri 23| Sun 25| Tue 27|
            Fri 16  Sun 18  Tue 20  Thu 22  Sat 24  Mon 26  Wed 28

 A slightly quieter week on python-dev.  As you can see, most Python
 developers are too well-adjusted to post much on weekends.  Or
 Mondays.

 There was a lot of traffic on the bugs, patches and checkins lists in
 preparation for the upcoming 2.1b1 release.


    * backwards incompatibility *

 Most of the posts in the large spike in the middle of the posting
 distribution were on the subject of backward compatibility.  One of
 the unexpected (by those of us that hadn't thought too hard about it)
 consequences of nested scopes was that some code using the dreaded
 "from-module-import-*" inside functions became ambiguous, and
 the plan was to ban such code in Python 2.1.  This provoked a storm
 of protest from many quarters, including python-dev and
 comp.lang.python.  If you really want to read all of this, start
 here:

  <http://mail.python.org/pipermail/python-dev/2001-February/013003.html>

 However, as you will know if you read comp.lang.python, PythonLabs
 took note, and in:

  <http://mail.python.org/pipermail/python-dev/2001-February/013125.html>
 
 Guido announced that the new nested scopes behaviour would be opt-in
 in 2.1, but code that will break in python 2.2 when nested scopes
 become the default will produce a warning in 2.1.  To get the new
 behaviour in a module, one will need to put

    from __future__ import nested_scopes

 at the top of the module.  It is possible this gimmick will be used
 to introduce further backwards-incompatible features in the future.
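The behavioural difference can be sketched with a closure (hypothetical example; without nested scopes the lookup of `n` in the inner function falls through to the module's globals and fails):

```python
from __future__ import nested_scopes  # a no-op on Pythons where this is standard

def make_adder(n):
    def add(x):
        # `n` is a free variable; with nested scopes it is resolved in
        # make_adder's scope instead of the global namespace.
        return x + n
    return add

add3 = make_adder(3)
print(add3(4))  # prints 7
```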


    * obmalloc *

 After some more discussion, including Neil Schemenauer pointing out
 that obmalloc might enable him to make the cycle GC faster, obmalloc
 was finally checked in.

 There's a second patch from Vladimir Marangozov implementing a memory
 profiler.  (sorry for the long line)

  <http://sourceforge.net/tracker/index.php?func=detail&aid=401229&group_id=5470&atid=305470>

 Opinion was muted about this; as Neil summed up in:

  <http://mail.python.org/pipermail/python-dev/2001-February/013205.html>

 no one cares enough to put in the time and review this patch.
 Sufficiently violently wielded opinions may swing the day...


    * pydoc *

 Ka-Ping Yee checked in his amazing pydoc.  pydoc was described in

  <http://mail.python.org/pipermail/python-dev/2001-January/011538.html>

 It gives command line and web browser access to Python's
 documentation, and will be installed as a separate script in 2.1.


    * other stuff *

 It is believed that the case-sensitive import issues mentioned in the
 last summary have been sorted out, although it will be hard to be
 sure until the beta.

 The unit-test discussion petered out.  Nothing has been checked in
 yet.

 The iterators discussion seems to have disappeared.  At least, your
 author can't find it!

Cheers,
M.




From guido at digicool.com  Fri Mar  2 15:22:27 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 09:22:27 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
Message-ID: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>

I was tickled when I found a quote from Tim Berners-Lee about Python
here: http://www.w3.org/2000/10/swap/#L88

Most quotable part: "Python is a language you can get into on one
battery!"

We should be able to use that for PR somewhere...

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mwh21 at cam.ac.uk  Fri Mar  2 15:32:01 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 02 Mar 2001 14:32:01 +0000
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: "A.M. Kuchling"'s message of "Wed, 28 Feb 2001 12:55:12 -0800"
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk>

"A.M. Kuchling" <akuchling at users.sourceforge.net> writes:

> --- NEW FILE: pydoc ---
> #!/usr/bin/env python
> 

Could I make a request that this gets munged to point to the python
that's being installed at build time?  I've just built from CVS,
installed in /usr/local, and:

$ pydoc -g
Traceback (most recent call last):
  File "/usr/local/bin/pydoc", line 3, in ?
    import pydoc
ImportError: No module named pydoc

because the /usr/bin/env python thing hits the older python in /usr
first.

Don't bother if this is actually difficult.

Cheers,
M.




From guido at digicool.com  Fri Mar  2 15:34:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 09:34:37 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: Your message of "02 Mar 2001 14:32:01 GMT."
             <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> 
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net>  
            <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <200103021434.JAA06630@cj20424-a.reston1.va.home.com>

> "A.M. Kuchling" <akuchling at users.sourceforge.net> writes:
> 
> > --- NEW FILE: pydoc ---
> > #!/usr/bin/env python
> > 
> 
> Could I make a request that this gets munged to point to the python
> that's being installed at build time?  I've just built from CVS,
> installed in /usr/local, and:
> 
> $ pydoc -g
> Traceback (most recent call last):
>   File "/usr/local/bin/pydoc", line 3, in ?
>     import pydoc
> ImportError: No module named pydoc
> 
> because the /usr/bin/env python thing hits the older python in /usr
> first.
> 
> Don't bother if this is actually difficult.

This could become a standard distutils feature!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From akuchlin at mems-exchange.org  Fri Mar  2 15:56:17 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 2 Mar 2001 09:56:17 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <200103021434.JAA06630@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 02, 2001 at 09:34:37AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com>
Message-ID: <20010302095617.A11182@ute.cnri.reston.va.us>

On Fri, Mar 02, 2001 at 09:34:37AM -0500, Guido van Rossum wrote:
>> because the /usr/bin/env python thing hits the older python in /usr
>> first.
>> Don't bother if this is actually difficult.
>
>This could become a standard distutils feature!

It already does this for regular distributions (see build_scripts.py),
but running with a newly built Python causes problems; it uses
sys.executable, which results in '#!python' at build time.  I'm not
sure how to fix this; perhaps the Makefile should always set a
BUILDING_PYTHON environment variable, and the Distutils could check
for its being set.  
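A rough sketch of the munging step and the failure mode described above (munge_first_line is a hypothetical helper for illustration, not the actual Distutils code):

```python
import sys

def munge_first_line(line, interpreter=sys.executable):
    # Hypothetical sketch: rewrite a python #! line to point at the
    # interpreter running setup.py.  When that interpreter is an
    # uninstalled build, sys.executable can be the bare string
    # "python", producing the useless "#!python" line.
    if line.startswith('#!') and 'python' in line:
        return '#!%s\n' % interpreter
    return line

print(munge_first_line('#!/usr/bin/python\n', interpreter='python'), end='')
```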

--amk




From nas at arctrix.com  Fri Mar  2 16:03:00 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 2 Mar 2001 07:03:00 -0800
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <20010302095617.A11182@ute.cnri.reston.va.us>; from akuchlin@mems-exchange.org on Fri, Mar 02, 2001 at 09:56:17AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302095617.A11182@ute.cnri.reston.va.us>
Message-ID: <20010302070300.B11722@glacier.fnational.com>

On Fri, Mar 02, 2001 at 09:56:17AM -0500, Andrew Kuchling wrote:
> It already does this for regular distributions (see build_scripts.py),
> but running with a newly built Python causes problems; it uses
> sys.executable, which results in '#!python' at build time.  I'm not
> sure how to fix this; perhaps the Makefile should always set a
> BUILDING_PYTHON environment variable, and the Distutils could check
> for its being set.  

setup.py could fix this by assigning sys.executable to $(prefix)/bin/python
before installing.  I don't know if that would break anything
else though.

  Neil



From DavidA at ActiveState.com  Fri Mar  2 02:05:59 2001
From: DavidA at ActiveState.com (David Ascher)
Date: Thu, 1 Mar 2001 17:05:59 -0800
Subject: [Python-Dev] Finally, a Python Cookbook!
Message-ID: <PLEJJNOHDIGGLDPOGPJJOEOKCNAA.DavidA@ActiveState.com>

Hello all --

ActiveState is now hosting a site
(http://www.ActiveState.com/PythonCookbook) that will be the beginning of a
series of community-based language-specific cookbooks to be jointly
sponsored by ActiveState and O'Reilly.

The first in the series is the "Python Cookbook".  We will be announcing
this effort at the Python Conference, but wanted to give you a sneak peek at
it ahead of time.

The idea behind it is for it to be a managed open collaborative repository
of Python recipes that implements RCD (rapid content development) for a
cookbook that O'Reilly will eventually publish. The Python Cookbook will be
freely available for review and use by all. It will also be different than
any other project of its kind in one very important way. This will be a
community effort. A book written by the Python community and delivered to
the Python Community, as a handy reference and invaluable aid for those
still to join. The partnership of ActiveState and O'Reilly provides the
framework, the organization, and the resources necessary to help bring this
book to life.

If you've got the time, please dig in your code base for recipes which you
may have developed and consider contributing them.  That way, you'll help us
'seed' the cookbook for its launch at the 9th Python Conference on March
5th!

Whether you have the time to contribute or not, we'd appreciate it if you
registered, browsed the site and gave us feedback at
pythoncookbook at ActiveState.com.

We want to make sure that this site reflects the community's needs, so all
feedback is welcome.

Thanks in advance for all your efforts in making this a successful endeavor.

Thanks,

David Ascher & the Cookbook team
ActiveState - Perl Python Tcl XSLT - Programming for the People

Vote for Your Favorite Perl & Python Programming
Accomplishments in the first Active Awards!
>>http://www.ActiveState.com/Awards  <http://www.activestate.com/awards><<




From gward at cnri.reston.va.us  Fri Mar  2 17:10:53 2001
From: gward at cnri.reston.va.us (Greg Ward)
Date: Fri, 2 Mar 2001 11:10:53 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <200103021434.JAA06630@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 02, 2001 at 09:34:37AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com>
Message-ID: <20010302111052.A14221@thrak.cnri.reston.va.us>

On 02 March 2001, Guido van Rossum said:
> This could become a standard distutils feature!

It is -- if a script is listed in 'scripts' in setup.py, and it's a Python
script, its #! line is automatically munged to point to the python that's
running the setup script.

Hmmm, this could be a problem if that python hasn't been installed itself
yet.  IIRC, it just trusts sys.executable.

        Greg



From tim.one at home.com  Fri Mar  2 17:27:43 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 2 Mar 2001 11:27:43 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGKJDAA.tim.one@home.com>

[Guido]
> I was tickled when I found a quote from Tim Berners-Lee about Python
> here: http://www.w3.org/2000/10/swap/#L88
>
> Most quotable part: "Python is a language you can get into on one
> battery!"

Most baffling part:  "One day, 15 minutes before I had to leave for the
airport, I got my laptop back out of my bag, and sucked off the web the
python 1.6 system ...".  What about python.org steered people toward 1.6?  Of
course, Tim *is* a Tim, and they're not always rational ...





From guido at digicool.com  Fri Mar  2 17:28:59 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 11:28:59 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: Your message of "Fri, 02 Mar 2001 11:27:43 EST."
             <LNBBLJKPBEHFEDALKOLCGEGKJDAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCGEGKJDAA.tim.one@home.com> 
Message-ID: <200103021628.LAA07147@cj20424-a.reston1.va.home.com>

> [Guido]
> > I was tickled when I found a quote from Tim Berners-Lee about Python
> > here: http://www.w3.org/2000/10/swap/#L88
> >
> > Most quotable part: "Python is a language you can get into on one
> > battery!"
> 
> Most baffling part:  "One day, 15 minutes before I had to leave for the
> airport, I got my laptop back out of my bag, and sucked off the web the
> python 1.6 system ...".  What about python.org steered people toward 1.6?  Of
> course, Tim *is* a Tim, and they're not always rational ...

My guess is this was before 2.0 final was released.  I don't blame
him.  And after all, he's a Tim -- he can do what he wants to! :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas.heller at ion-tof.com  Fri Mar  2 17:38:04 2001
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 2 Mar 2001 17:38:04 +0100
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us>
Message-ID: <02b501c0a337$282a8d60$e000a8c0@thomasnotebook>

Greg Ward, who suddenly reappears:
> On 02 March 2001, Guido van Rossum said:
> > This could become a standard distutils feature!
> 
> It is -- if a script is listed in 'scripts' in setup.py, and it's a Python
> script, its #! line is automatically munged to point to the python that's
> running the setup script.
> 
What about this code in build_scripts.py?

  # check if Python is called on the first line with this expression.
  # This expression will leave lines using /usr/bin/env alone; presumably
  # the script author knew what they were doing...)
  first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')

Doesn't this mean that
#!/usr/bin/env python
lines are NOT fixed?
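Checking the pattern interactively seems to confirm this (pattern copied from the build_scripts.py snippet above):

```python
import re

# Pattern from build_scripts.py: the (?! ... ) negative lookahead makes
# the match fail when the interpreter path is /usr/bin/env, so such
# first lines are left untouched by the munging step.
first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')

print(bool(first_line_re.match('#!/usr/local/bin/python')))  # True -> munged
print(bool(first_line_re.match('#!/usr/bin/env python')))    # False -> left alone
```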

Thomas




From gward at python.net  Fri Mar  2 17:41:24 2001
From: gward at python.net (Greg Ward)
Date: Fri, 2 Mar 2001 11:41:24 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <20010302070300.B11722@glacier.fnational.com>; from nas@arctrix.com on Fri, Mar 02, 2001 at 07:03:00AM -0800
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302095617.A11182@ute.cnri.reston.va.us> <20010302070300.B11722@glacier.fnational.com>
Message-ID: <20010302114124.A2826@cthulhu.gerg.ca>

On 02 March 2001, Neil Schemenauer said:
> setup.py fix this by assigning sys.executable to $(prefix)/bin/python
> before installing.  I don't know if that would break anything
> else though.

That *should* work.  Don't think Distutils relies on
"os.path.exists(sys.executable)" anywhere....

...oops, may have spoken too soon: the byte-compilation code (in
distutils/util.py) spawns sys.executable.  So if byte-compilation is
done in the same run as installing scripts, you lose.  Fooey.

        Greg
-- 
Greg Ward - just another /P(erl|ython)/ hacker          gward at python.net
http://starship.python.net/~gward/
Heisenberg may have slept here.



From gward at python.net  Fri Mar  2 17:47:39 2001
From: gward at python.net (Greg Ward)
Date: Fri, 2 Mar 2001 11:47:39 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <02b501c0a337$282a8d60$e000a8c0@thomasnotebook>; from thomas.heller@ion-tof.com on Fri, Mar 02, 2001 at 05:38:04PM +0100
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us> <02b501c0a337$282a8d60$e000a8c0@thomasnotebook>
Message-ID: <20010302114739.B2826@cthulhu.gerg.ca>

On 02 March 2001, Thomas Heller said:
> Greg Ward, who suddenly reappears:

"He's not dead, he's just resting!"

> What about this code in build_scripts.py?
> 
>   # check if Python is called on the first line with this expression.
>   # This expression will leave lines using /usr/bin/env alone; presumably
>   # the script author knew what they were doing...)
>   first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')

Hmm, that's a recent change:

  revision 1.7
  date: 2001/02/28 20:59:33;  author: akuchling;  state: Exp;  lines: +5 -3
  Leave #! lines featuring /usr/bin/env alone

> Doesn't this mean that
> #!/usr/bin/env python
> lines are NOT fixed?

Yup.  Andrew, care to explain why not munging "#!/usr/bin/env python"
lines is the right thing to do?  I happen to think it's not; I think #!
lines should always be munged (assuming this is a Python script, of
course).

        Greg
-- 
Greg Ward - nerd                                        gward at python.net
http://starship.python.net/~gward/
Disclaimer: All rights reserved. Void where prohibited. Limit 1 per customer.



From akuchlin at mems-exchange.org  Fri Mar  2 17:54:59 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Fri, 2 Mar 2001 11:54:59 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: <20010302114739.B2826@cthulhu.gerg.ca>; from gward@python.net on Fri, Mar 02, 2001 at 11:47:39AM -0500
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us> <02b501c0a337$282a8d60$e000a8c0@thomasnotebook> <20010302114739.B2826@cthulhu.gerg.ca>
Message-ID: <20010302115459.A3029@ute.cnri.reston.va.us>

On Fri, Mar 02, 2001 at 11:47:39AM -0500, Greg Ward wrote:
>>   # check if Python is called on the first line with this expression.
>>   # This expression will leave lines using /usr/bin/env alone; presumably
>>   # the script author knew what they were doing...)
>>   first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')
>
>Yup.  Andrew, care to explain why not munging "#!/usr/bin/env python"
>lines is the right thing to do?  I happen to think it's not; I think #!
>lines should always be munged (assuming this is a Python script, of
>course).

Disagree; as the comment says, "presumably the script author knew what
they were doing..." when they put /usr/bin/env at the top.  This had
to be done so that pydoc could be installed at all.

--amk



From guido at digicool.com  Fri Mar  2 18:01:50 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 12:01:50 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Tools/scripts pydoc,NONE,1.1
In-Reply-To: Your message of "Fri, 02 Mar 2001 11:54:59 EST."
             <20010302115459.A3029@ute.cnri.reston.va.us> 
References: <E14YDcu-0005FG-00@usw-pr-cvs1.sourceforge.net> <m3g0gwckda.fsf@atrus.jesus.cam.ac.uk> <200103021434.JAA06630@cj20424-a.reston1.va.home.com> <20010302111052.A14221@thrak.cnri.reston.va.us> <02b501c0a337$282a8d60$e000a8c0@thomasnotebook> <20010302114739.B2826@cthulhu.gerg.ca>  
            <20010302115459.A3029@ute.cnri.reston.va.us> 
Message-ID: <200103021701.MAA07349@cj20424-a.reston1.va.home.com>

> >>   # check if Python is called on the first line with this expression.
> >>   # This expression will leave lines using /usr/bin/env alone; presumably
> >>   # the script author knew what they were doing...)
> >>   first_line_re = re.compile(r'^#!(?!\s*/usr/bin/env\b).*python(\s+.*)?')
> >
> >Yup.  Andrew, care to explain why not munging "#!/usr/bin/env python"
> >lines is the right thing to do?  I happen to think it's not; I think #!
> >lines should always be munged (assuming this is a Python script, of
> >course).
> 
> Disagree; as the comment says, "presumably the script author knew what
> they were doing..." when they put /usr/bin/env at the top.  This had
> to be done so that pydoc could be installed at all.

Don't understand the last sentence -- what started this thread is that
when pydoc is installed but there's another (older) installed python
that is first on $PATH, pydoc breaks.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas at xs4all.net  Fri Mar  2 21:34:31 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 2 Mar 2001 21:34:31 +0100
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 02, 2001 at 09:22:27AM -0500
References: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>
Message-ID: <20010302213431.Q9678@xs4all.nl>

On Fri, Mar 02, 2001 at 09:22:27AM -0500, Guido van Rossum wrote:

> I was tickled when I found a quote from Tim Berners-Lee about Python
> here: http://www.w3.org/2000/10/swap/#L88

> Most quotable part: "Python is a language you can get into on one
> battery!"

Actually, I think this bit is more important:

"I remember Guido trying to persuade me to use python as I was trying to
persuade him to write web software!"

So when can we expect the new Python web interface ? :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at acm.org  Fri Mar  2 21:32:27 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 2 Mar 2001 15:32:27 -0500 (EST)
Subject: [Python-Dev] doc tree frozen for 2.1b1
Message-ID: <15008.859.4988.155789@localhost.localdomain>

  The documentation is frozen until the 2.1b1 announcement goes out.
I have a couple of checkins to make, but the formatted HTML for the
Windows installer has already been cut & shipped.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Fri Mar  2 21:41:34 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 15:41:34 -0500
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: Your message of "Fri, 02 Mar 2001 21:34:31 +0100."
             <20010302213431.Q9678@xs4all.nl> 
References: <200103021422.JAA06497@cj20424-a.reston1.va.home.com>  
            <20010302213431.Q9678@xs4all.nl> 
Message-ID: <200103022041.PAA12359@cj20424-a.reston1.va.home.com>

> Actually, I think this bit is more important:
> 
> "I remember Guido trying to persuade me to use python as I was trying to
> persuade him to write web software!"
> 
> So when can we expect the new Python web interface ? :-)

There's actually a bit of a sad story.  I really liked the early web,
and wrote one of the earliest graphical web browsers (before Mozilla;
I was using Python and stdwin).  But I didn't get the importance of
dynamic content, and initially scoffed at the original cgi.py,
concocted by Michael McLay (always a good nose for trends!) and Steven
Majewski (ditto).

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fdrake at acm.org  Fri Mar  2 21:49:09 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 2 Mar 2001 15:49:09 -0500 (EST)
Subject: [Python-Dev] Python 2.1 beta 1 documentation online
Message-ID: <15008.1861.84677.687041@localhost.localdomain>

  The documentation for Python 2.1 beta 1 is now online:

	http://python.sourceforge.net/devel-docs/

  This is the same as the documentation that will ship with the
Windows installer.
  This is the online location of the development version of the
documentation.  As I make updates to the documentation, this will be
updated periodically; the "front page" will indicate the date of the
most recent update.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Fri Mar  2 23:46:09 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 02 Mar 2001 17:46:09 -0500
Subject: [Python-Dev] Python 2.1b1 released
Message-ID: <200103022246.RAA18529@cj20424-a.reston1.va.home.com>

With great pleasure I announce the release of Python 2.1b1.  This is a
big step towards the release of Python 2.1; the final release is
expected to take place in mid April.

Find out all about 2.1b1, including docs and downloads (Windows
installer and source tarball), at the 2.1 release page:

    http://www.python.org/2.1/


WHAT'S NEW?
-----------

For the big picture, see Andrew Kuchling's What's New in Python 2.1:

    http://www.amk.ca/python/2.1/

For more detailed release notes, see SourceForge:

    http://sourceforge.net/project/shownotes.php?release_id=25924

The big news since 2.1a2 was released a month ago:

- Nested Scopes (PEP 227)[*] are now optional.  They must be enabled
  by including the statement "from __future__ import nested_scopes" at
  the beginning of a module (PEP 236).  Nested scopes will be a
  standard feature in Python 2.2.

- Compile-time warnings are now generated for a number of conditions
  that will break or change in meaning when nested scopes are enabled.

- The new tool *pydoc* displays module documentation, extracted from
  doc strings.  It works in a text environment as well as in a GUI
  environment (where it cooperates with a web browser).  On Windows,
  this is in the Start menu as "Module Docs".

- Case-sensitive import.  On systems with case-insensitive but
  case-preserving file systems, such as Windows (including Cygwin) and
  MacOS, import now continues to search the next directory on sys.path
  when a case mismatch is detected.  See PEP 235 for the full scoop.

- New platforms.  Python 2.1 now fully supports MacOS X, Cygwin, and
  RISCOS.

[*] For PEPs (Python Enhancement Proposals), see the PEP index:

    http://python.sourceforge.net/peps/

I hope to see you all next week at the Python9 conference in Long
Beach, CA:

    http://www.python9.org

--Guido van Rossum (home page: http://www.python.org/~guido/)



From aahz at panix.com  Sat Mar  3 19:21:44 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sat, 3 Mar 2001 13:21:44 -0500 (EST)
Subject: [Python-Dev] Bug fix releases (was Re: Nested scopes resolution -- you can breathe again!)
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org>
Message-ID: <200103031821.NAA24060@panix3.panix.com>

[posted to c.l.py with cc to python-dev]

[I apologize for the delay in posting this, but it's taken me some time
to get my thoughts straight.  I hope that by posting this right before
IPC9 there'll be a chance to get some good discussion in person.]

In article <mailman.982897324.9109.python-list at python.org>,
Guido van Rossum  <guido at digicool.com> wrote:
>
>We have clearly underestimated how much code the nested scopes would
>break, but more importantly we have underestimated how much value our
>community places on stability.  

I think so, yes, on that latter clause.  I think perhaps it wasn't clear
at the time, but I believe that much of the yelling over "print >>" was
less about the specific design than because it came so close to the
release of 2.0 that there wasn't *time* to sit down and talk things
over rationally.

As I see it, there's a natural tension between adding features
and delivering bug fixes.  Particularly because of Microsoft, I think
that upgrading to a feature release to get bug fixes has become anathema
to a lot of people, and I think that seeing features added or changed
close to a release reminds people too much of the Microsoft upgrade
treadmill.

>So here's the deal: we'll make nested scopes an optional feature in
>2.1, default off, selectable on a per-module basis using a mechanism
>that's slightly hackish but is guaranteed to be safe.  (See below.)
>
>At the same time, we'll augment the compiler to detect all situations
>that will break when nested scopes are introduced in the future, and
>issue warnings for those situations.  The idea here is that warnings
>don't break code, but encourage folks to fix their code so we can
>introduce nested scopes in 2.2.  Given our current pace of releases
>that should be about 6 months warning.

As some other people have pointed out, six months is actually a rather
short cycle when it comes to delivering enterprise applications across
hundreds or thousands of machines.  Notice how many people have said
they haven't upgraded from 1.5.2 yet!  Contrast that with the quickness
of the 1.5.1 to 1.5.2 upgrade.

I believe that "from __future__" is a good idea, but it is at best a
bandage over the feature/bug fix tension.  I think that the real issue
is that in the world of core Python development, release N is always a
future release, never the current release; as soon as release N goes out
the door into production, it immediately becomes release N-1 and forever
dead to development.

Rather than change that mindset directly, I propose that we move to a
forked model of development.  During the development cycle for any given
release, release (N-1).1 is also a live target -- but strictly for bug
fixes.  I suggest that shortly after the release of Na1, there should
also be a release for (N-1).1b1; shortly after the release of Nb1, there
would be (N-1).1b2.  And (N-1).1 would be released shortly after N.

This means that each feature-based release gets one-and-only-one pure
bugfix release.  I think this will do much to promote the idea of Python
as a stable platform for application development.

There are a number of ways I can see this working, including setting up
a separate project at SourceForge (e.g. pythonpatch.sourceforge.net).
But I don't think this will work at all unless the PythonLabs team is at
least willing to "bless" the bugfix release.  Uncle Timmy has been known
to make snarky comments about forever maintaining 1.5.2; I think this is
a usable compromise that will take relatively little effort to keep
going once it's set up.

I think one key advantage of this approach is that a lot more people
will be willing to try out a beta of a strict bugfix release, so the
release N bugfixes will get more testing than they otherwise would.

If there's interest in this idea, I'll write it up as a formal PEP.

It's too late for my proposed model to work during the 2.1 release
cycle, but I think it would be an awfully nice gesture to the community
to take a month off after 2.1 to create 2.0.1, before going on to 2.2.



BTW, you should probably blame Fredrik for this idea.  ;-)  If he had
skipped providing 1.5.2 and 2.0 versions of sre, I probably wouldn't
have considered this a workable idea.  I was just thinking that it was
too bad there wasn't a packaged version of 2.0 containing the new sre,
and that snowballed into this.
-- 
                      --- Aahz (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het    <*>     http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Nostalgia just ain't what it used to be



From guido at digicool.com  Sat Mar  3 20:10:35 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 14:10:35 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 13:21:44 EST."
             <200103031821.NAA24060@panix3.panix.com> 
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org>  
            <200103031821.NAA24060@panix3.panix.com> 
Message-ID: <200103031910.OAA21663@cj20424-a.reston1.va.home.com>

Aahz writes:
> [posted to c.l.py with cc to python-dev]
> 
> [I apologize for the delay in posting this, but it's taken me some time
> to get my thoughts straight.  I hope that by posting this right before
> IPC9 there'll be a chance to get some good discussion in person.]

Excellent.  Even in time for me to mention this in my keynote! :-)

> In article <mailman.982897324.9109.python-list at python.org>,
> Guido van Rossum  <guido at digicool.com> wrote:
> >
> >We have clearly underestimated how much code the nested scopes would
> >break, but more importantly we have underestimated how much value our
> >community places on stability.  
> 
> I think so, yes, on that latter clause.  I think perhaps it wasn't clear
> at the time, but I believe that much of the yelling over "print >>" was
> less about the specific design than because it came so close to the
> release of 2.0 that there wasn't *time* to sit down and talk things
> over rationally.

In my eyes the issues are somewhat different: "print >>" couldn't
possibly break existing code; nested scopes clearly do, and that's why
we decided to use the __future__ statement.

But I understand that you're saying that the community has grown so
conservative that it can't stand new features even if they *are* fully
backwards compatible.

I wonder, does that extend to new library modules?  Is there also
resistance against the growth there?  I don't think so -- if anything,
people are clamoring for more stuff to become standard (while at the
same time I feel some pressure to cut dead wood, like the old SGI
multimedia modules).

So that relegates us at PythonLabs to a number of things: coding new
modules (boring), or trying to improve performance of the virtual
machine (equally boring, and difficult to boot), or fixing bugs (did I
mention boring? :-).

So what can we do for fun?  (Besides redesigning Zope, which is lots
of fun, but runs into the same issues.)

> As I see it, there's a natural tension between adding features
> and delivering bug fixes.  Particularly because of Microsoft, I think
> that upgrading to a feature release to get bug fixes has become anathema
> to a lot of people, and I think that seeing features added or changed
> close to a release reminds people too much of the Microsoft upgrade
> treadmill.

Actually, I thought that the Microsoft way these days was to smuggle
entire new subsystems into bugfix releases.  What else are "Service
Packs" for? :-)

> >So here's the deal: we'll make nested scopes an optional feature in
> >2.1, default off, selectable on a per-module basis using a mechanism
> >that's slightly hackish but is guaranteed to be safe.  (See below.)
> >
> >At the same time, we'll augment the compiler to detect all situations
> >that will break when nested scopes are introduced in the future, and
> >issue warnings for those situations.  The idea here is that warnings
> >don't break code, but encourage folks to fix their code so we can
> >introduce nested scopes in 2.2.  Given our current pace of releases
> >that should be about 6 months warning.
> 
> As some other people have pointed out, six months is actually a rather
> short cycle when it comes to delivering enterprise applications across
> hundreds or thousands of machines.  Notice how many people have said
> they haven't upgraded from 1.5.2 yet!  Contrast that with the quickness
> of the 1.5.1 to 1.5.2 upgrade.

Clearly, we're taking this into account.  If we believed you all
upgraded the day we announced a new release, we'd be even more
conservative with adding new features (at least features introducing
incompatibilities).

> I believe that "from __future__" is a good idea, but it is at best a
> bandage over the feature/bug fix tension.  I think that the real issue
> is that in the world of core Python development, release N is always a
> future release, never the current release; as soon as release N goes out
> the door into production, it immediately becomes release N-1 and forever
> dead to development.
> 
> Rather than change that mindset directly, I propose that we move to a
> forked model of development.  During the development cycle for any given
> release, release (N-1).1 is also a live target -- but strictly for bug
> fixes.  I suggest that shortly after the release of Na1, there should
> also be a release for (N-1).1b1; shortly after the release of Nb1, there
> would be (N-1).1b2.  And (N-1).1 would be released shortly after N.

Your math at first confused the hell out of me, but I see what you
mean.  You want us to spend time on 2.0.1 which should be a bugfix
release for 2.0, while at the same time working on 2.1 which is a new
feature release.

Guess what -- I am secretly (together with the PSU) planning a 2.0.1
release.  I'm waiting however for obtaining the ownership rights to
the 2.0 release, so we can fix the GPL incompatibility issue in the
license at the same time.  (See the 1.6.1 release.)  I promise that
2.0.1, unlike 1.6.1, will contain more than a token set of real
bugfixes.  Hey, we already have a branch in the CVS tree for 2.0.1
development!  (Tagged "release20-maint".)

We could use some checkins on that branch though.

> This means that each feature-based release gets one-and-only-one pure
> bugfix release.  I think this will do much to promote the idea of Python
> as a stable platform for application development.

Anything we can do to please those republicans! :-)

> There are a number of ways I can see this working, including setting up
> a separate project at SourceForge (e.g. pythonpatch.sourceforge.net).
> But I don't think this will work at all unless the PythonLabs team is at
> least willing to "bless" the bugfix release.  Uncle Timmy has been known
> to make snarky comments about forever maintaining 1.5.2; I think this is
> a usable compromise that will take relatively little effort to keep
> going once it's set up.

With the CVS branch it's *trivial* to keep it going.  We should have
learned from the Tcl folks, they've had 8.NpM releases for a while.

> I think one key advantage of this approach is that a lot more people
> will be willing to try out a beta of a strict bugfix release, so the
> release N bugfixes will get more testing than they otherwise would.

Wait a minute!  Now you're making it too complicated.  Betas of bugfix
releases?  That seems to defeat the purpose.  What kind of
beta-testing does a pure bugfix release need?  Presumably each
individual bugfix applied has already been tested before it is checked
in!  Or are you thinking of adding small new features to a "bugfix"
release?  That ought to be a no-no according to your own philosophy!

> If there's interest in this idea, I'll write it up as a formal PEP.

Please do.

> It's too late for my proposed model to work during the 2.1 release
> cycle, but I think it would be an awfully nice gesture to the community
> to take a month off after 2.1 to create 2.0.1, before going on to 2.2.

It's not too late, as I mentioned.  We'll also do this for 2.1.

> BTW, you should probably blame Fredrik for this idea.  ;-)  If he had
> skipped providing 1.5.2 and 2.0 versions of sre, I probably wouldn't
> have considered this a workable idea.  I was just thinking that it was
> too bad there wasn't a packaged version of 2.0 containing the new sre,
> and that snowballed into this.

So the new (2.1) sre code should be merged back into 2.0.1, right?
Fredrik, go ahead!  We'll start planning for the 2.0.1 release right
after we're back from the conference.

BTW, See you at the conference!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fdrake at acm.org  Sat Mar  3 20:30:13 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Sat, 3 Mar 2001 14:30:13 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103031910.OAA21663@cj20424-a.reston1.va.home.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
	<mailman.982897324.9109.python-list@python.org>
	<200103031821.NAA24060@panix3.panix.com>
	<200103031910.OAA21663@cj20424-a.reston1.va.home.com>
Message-ID: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > I wonder, does that extend to new library modules?  Is there also
 > resistance against the growth there?  I don't think so -- if anything,
 > people are clamoring for more stuff to become standard (while at the

  There is still the issue of name clashes; introducing a new module
in the top-level namespace introduces a potential conflict with
someone's application-specific modules.  This is a good reason for us
to get the standard library packagized sooner rather than later
(although this would have to be part of a "feature" release;).

 > Wait a minute!  Now you're making it too complicated.  Betas of bugfix
 > releases?  That seems to defeat the purpose.  What kind of

  Betas of the bugfix releases are important -- portability testing is
fairly difficult to do when all we have are Windows and Linux/x86
boxes.  There's definitely a need for at least one beta.  We probably
don't need the lengthy, multi-phase alpha/alpha/beta/beta/candidate
cycle we're using for feature releases now.

 > It's not too late, as I mentioned.  We'll also do this for 2.1.

  Managing the bugfix releases would also be an excellent task for
someone who's expecting to use the bugfix releases more than the
feature releases -- the mentality has to be right for the task.  I
know I'm much more of a "features" person, and would have a hard time
not crossing the line if it were up to me what went into a bugfix
release.

 > BTW, See you at the conference!

  If we don't get snowed in!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Sat Mar  3 20:44:19 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 14:44:19 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 14:30:13 EST."
             <15009.17989.88203.844343@cj42289-a.reston1.va.home.com> 
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <200103031910.OAA21663@cj20424-a.reston1.va.home.com>  
            <15009.17989.88203.844343@cj42289-a.reston1.va.home.com> 
Message-ID: <200103031944.OAA21835@cj20424-a.reston1.va.home.com>

> Guido van Rossum writes:
>  > I wonder, does that extend to new library modules?  Is there also
>  > resistance against the growth there?  I don't think so -- if anything,
>  > people are clamoring for more stuff to become standard (while at the
> 
>   There is still the issue of name clashes; introducing a new module
> in the top-level namespace introduces a potential conflict with
> someone's application-specific modules.  This is a good reason for us
> to get the standard library packagized sooner rather than later
> (although this would have to be part of a "feature" release;).

But of course the library repackaging in itself would cause enormous
outcries, because in a very real sense it *does* break code.

>  > Wait a minute!  Now you're making it too complicated.  Betas of bugfix
>  > releases?  That seems to defeat the purpose.  What kind of
> 
>   Betas of the bugfix releases are important -- portability testing is
> fairly difficult to do when all we have are Windows and Linux/x86
> boxes.  There's definitely a need for at least one beta.  We probably
> don't need the lengthy, multi-phase alpha/alpha/beta/beta/candidate
> cycle we're using for feature releases now.

OK, you can have *one* beta.  That's it.

>  > It's not too late, as I mentioned.  We'll also do this for 2.1.
> 
>   Managing the bugfix releases would also be an excellent task for
> someone who's expecting to use the bugfix releases more than the
> feature releases -- the mentality has to be right for the task.  I
> know I'm much more of a "features" person, and would have a hard time
> not crossing the line if it were up to me what went into a bugfix
> release.

That's how all of us here at PythonLabs are feeling...  I feel a
community task coming.  I'll bless a 2.0.1 release and the general
idea of bugfix releases, but doing the grunt work won't be a
PythonLabs task.  Someone else inside or outside Python-dev will have
to do some work.  Aahz?

>  > BTW, See you at the conference!
> 
>   If we don't get snowed in!

Good point.  East coasters flying to LA on Monday, watch your weather
forecast!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fdrake at cj42289-a.reston1.va.home.com  Sat Mar  3 20:47:49 2001
From: fdrake at cj42289-a.reston1.va.home.com (Fred Drake)
Date: Sat,  3 Mar 2001 14:47:49 -0500 (EST)
Subject: [Python-Dev] [development doc updates]
Message-ID: <20010303194749.629AC28803@cj42289-a.reston1.va.home.com>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/


Additional information on using non-Microsoft compilers on Windows when
using the Distutils, contributed by Rene Liebscher.




From tim.one at home.com  Sat Mar  3 20:55:09 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 3 Mar 2001 14:55:09 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>

[Fred L. Drake, Jr.]
> ...
>   Managing the bugfix releases would also be an excellent task for
> someone who's expecting to use the bugfix releases more than the
> feature releases -- the mentality has to be right for the task.  I
> know I'm much more of a "features" person, and would have a hard time
> not crossing the line if it were up to me what went into a bugfix
> release.

Note there was never a bugfix release for 1.5.2, despite that 1.5.2 had some
serious bugs, and that 1.5.2 was current for an unprecedentedly long time.
Guido put out a call for volunteers to produce a 1.5.2 bugfix release, but
nobody responded.  Past is prelude ...

everyone-is-generous-with-everyone-else's-time-ly y'rs  - tim




From fdrake at acm.org  Sat Mar  3 20:53:45 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Sat, 3 Mar 2001 14:53:45 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
References: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
Message-ID: <15009.19401.787058.744462@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Note there was never a bugfix release for 1.5.2, despite that 1.5.2 had some
 > serious bugs, and that 1.5.2 was current for an unprecedentedly long time.
 > Guido put out a call for volunteers to produce a 1.5.2 bugfix release, but
 > nobody responded.  Past is prelude ...

  And as long as that continues, I'd have to conclude that the user
base is largely happy with the way we've done things.  *If* users want
bugfix releases badly enough, someone will do them.  If not, hey,
features can be useful!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From fdrake at acm.org  Sat Mar  3 20:54:31 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Sat, 3 Mar 2001 14:54:31 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103031944.OAA21835@cj20424-a.reston1.va.home.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
	<mailman.982897324.9109.python-list@python.org>
	<200103031821.NAA24060@panix3.panix.com>
	<200103031910.OAA21663@cj20424-a.reston1.va.home.com>
	<15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
	<200103031944.OAA21835@cj20424-a.reston1.va.home.com>
Message-ID: <15009.19447.154958.449303@cj42289-a.reston1.va.home.com>

Guido van Rossum writes:
 > But of course the library repackaging in itself would cause enormous
 > outcries, because in a very real sense it *does* break code.

  That's why it has to be a feature release.  ;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Sat Mar  3 21:07:09 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 15:07:09 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 14:55:09 EST."
             <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com> 
Message-ID: <200103032007.PAA21925@cj20424-a.reston1.va.home.com>

> [Fred L. Drake, Jr.]
> > ...
> >   Managing the bugfix releases would also be an excellent task for
> > someone who's expecting to use the bugfix releases more than the
> > feature releases -- the mentality has to be right for the task.  I
> > know I'm much more of a "features" person, and would have a hard time
> > not crossing the line if it were up to me what went into a bugfix
> > release.

[Uncle Timmy]
> Note there was never a bugfix release for 1.5.2, despite that 1.5.2 had some
> serious bugs, and that 1.5.2 was current for an unprecedentedly long time.
> Guido put out a call for volunteers to produce a 1.5.2 bugfix release, but
> nobody responded.  Past is prelude ...
> 
> everyone-is-generous-with-everyone-else's-time-ly y'rs  - tim

I understand the warning.  How about the following (and then I really
have to go write my keynote speech :-).  PythonLabs will make sure
that it will happen.  But how much stuff goes into the bugfix release
is up to the community.

We'll give SourceForge commit privileges to individuals who want to do
serious work on the bugfix branch -- but before you get commit
privileges, you must first show that you know what you are doing by
submitting useful patches through the SourceForge patch manager.

Since a lot of the 2.0.1 effort will be deciding which code from 2.1
to merge back into 2.0.1, it may not make sense to upload context
diffs to SourceForge.  Instead, we'll accept reasoned instructions for
specific patches to be merged back.  Instructions like "cvs update
-j<rev1> -j<rev2> <file>" are very helpful; please also explain why!
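
A sketch of what such instructions might look like in practice (the file
name and revision numbers here are purely hypothetical, chosen only to
show the shape of a backport):

```shell
# Check out a working copy of the 2.0.1 maintenance branch.
cvs checkout -r release20-maint python/dist/src
cd python/dist/src
# Merge trunk revisions 2.42 -> 2.43 of one file into the working copy.
cvs update -j 2.42 -j 2.43 Lib/sre_compile.py
# Inspect the result, run the test suite, then commit with a rationale.
cvs commit -m "Backport sre fix (trunk revs 2.42-2.43) to 2.0.1"
```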

--Guido van Rossum (home page: http://www.python.org/~guido/)



From aahz at panix.com  Sat Mar  3 22:55:28 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sat, 3 Mar 2001 16:55:28 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <mailman.983646726.27322.python-list@python.org>
Message-ID: <200103032155.QAA05049@panix3.panix.com>

In article <mailman.983646726.27322.python-list at python.org>,
Guido van Rossum  <guido at digicool.com> wrote:
>Aahz writes:
>>
>> I think so, yes, on that latter clause.  I think perhaps it wasn't clear
>> at the time, but I believe that much of the yelling over "print >>" was
> >> less about the specific design than because it came so close to the
>> release of 2.0 that there wasn't *time* to sit down and talk things
>> over rationally.
>
>In my eyes the issues are somewhat different: "print >>" couldn't
>possibly break existing code; nested scopes clearly do, and that's why
>we decided to use the __future__ statement.
>
>But I understand that you're saying that the community has grown so
>conservative that it can't stand new features even if they *are* fully
>backwards compatible.

Then you understand incorrectly.  There's a reason why I emphasized
"*time*" up above.  It takes time to grok a new feature, time to think
about whether and how we should argue in favor or against it, time to
write comprehensible and useful arguments.  In hindsight, I think you
probably did make the right design decision on "print >>", no matter how
ugly I think it looks.  But I still think you made absolutely the wrong
decision to include it in 2.0.

>So that relegates us at PythonLabs to a number of things: coding new
>modules (boring), or trying to improve performance of the virtual
>machine (equally boring, and difficult to boot), or fixing bugs (did I
>mention boring? :-).
>
>So what can we do for fun?  (Besides redesigning Zope, which is lots
>of fun, but runs into the same issues.)

Write new versions of Python.  You've come up with a specific protocol
in a later post that I think I approve of; I was trying to suggest a
balance between lots of grunt work maintenance and what I see as
perpetual language instability in the absence of any bug fix releases.

>Your math at first confused the hell out of me, but I see what you
>mean.  You want us to spend time on 2.0.1 which should be a bugfix
>release for 2.0, while at the same time working on 2.1 which is a new
>feature release.

Yup.  The idea is that because it's always an N and N-1 pair, the base
code is the same for both and applying patches to both should be
(relatively speaking) a small amount of extra work.  Most of the work
lies in deciding *which* patches should go into N-1.

>Guess what -- I am secretly (together with the PSU) planning a 2.0.1
>release.  I'm waiting however for obtaining the ownership rights to
>the 2.0 release, so we can fix the GPL incompatibility issue in the
>license at the same time.  (See the 1.6.1 release.)  I promise that
>2.0.1, unlike 1.6.1, will contain more than a token set of real
>bugfixes.  Hey, we already have a branch in the CVS tree for 2.0.1
>development!  (Tagged "release20-maint".)

Yay!  (Sorry, I'm not much of a CVS person; the one time I tried using
it, I couldn't even figure out where to download the software.  Call me
stupid.)

>We could use some checkins on that branch though.

Fair enough.

>> This means that each feature-based release gets one-and-only-one pure
>> bugfix release.  I think this will do much to promote the idea of Python
>> as a stable platform for application development.
>
>Anything we can do to please those republicans! :-)

<grin>

>> There are a number of ways I can see this working, including setting up
>> a separate project at SourceForge (e.g. pythonpatch.sourceforge.net).
>> But I don't think this will work at all unless the PythonLabs team is at
>> least willing to "bless" the bugfix release.  Uncle Timmy has been known
>> to make snarky comments about forever maintaining 1.5.2; I think this is
>> a usable compromise that will take relatively little effort to keep
>> going once it's set up.
>
>With the CVS branch it's *trivial* to keep it going.  We should have
>learned from the Tcl folks, they've had 8.NpM releases for a while.

I'm suggesting having one official PythonLabs-created bug fix release as
being a small incremental effort over the work in the feature release.
But if you want it to be an entirely community-driven effort, I can't
argue with that.

My one central point is that I think this will fail if PythonLabs
doesn't agree to formally certify each release.

>> I think one key advantage of this approach is that a lot more people
>> will be willing to try out a beta of a strict bugfix release, so the
>> release N bugfixes will get more testing than they otherwise would.
>
>Wait a minute!  Now you're making it too complicated.  Betas of bugfix
>releases?  That seems to defeat the purpose.  What kind of
>beta-testing does a pure bugfix release need?  Presumably each
>individual bugfix applied has already been tested before it is checked
>in!  

"The difference between theory and practice is that in theory, there is
no difference, but in practice, there is."

I've seen too many cases where a bugfix introduced new bugs somewhere
else.  Even if "tested", there might be a border case where an
unexpected result shows up.  Finally, there's the issue of system
testing, making sure the entire package of bugfixes works correctly.

The main reason I suggested two betas was to "lockstep" the bugfix
release to the next version's feature release.

>Or are you thinking of adding small new features to a "bugfix"
>release?  That ought to be a no-no according to your own philosophy!

That's correct.  One problem, though, is that sometimes it's a little
difficult to agree on whether a particular piece of code is a feature or
a bugfix.  For example, the recent work to resolve case-sensitive
imports could be argued either way -- and if we want Python 2.0 to run
on OS X, we'd better decide that it's a bugfix.  ;-)

>> If there's interest in this idea, I'll write it up as a formal PEP.
>
>Please do.

Okay, I'll do it after the conference.  I've e-mailed Barry to ask for a
PEP number.
-- 
                      --- Aahz (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het    <*>     http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Nostalgia just ain't what it used to be



From guido at digicool.com  Sat Mar  3 23:18:45 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 03 Mar 2001 17:18:45 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: Your message of "Sat, 03 Mar 2001 16:55:28 EST."
             <200103032155.QAA05049@panix3.panix.com> 
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <mailman.983646726.27322.python-list@python.org>  
            <200103032155.QAA05049@panix3.panix.com> 
Message-ID: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>

[Aahz]
> >> I think so, yes, on that latter clause.  I think perhaps it wasn't clear
> >> at the time, but I believe that much of the yelling over "print >>" was
> >> less about the specific design than because it came so close to the
> >> release of 2.0 that there wasn't *time* to sit down and talk things
> >> over rationally.

[Guido]
> >In my eyes the issues are somewhat different: "print >>" couldn't
> >possibly break existing code; nested scopes clearly do, and that's why
> >we decided to use the __future__ statement.
> >
> >But I understand that you're saying that the community has grown so
> >conservative that it can't stand new features even if they *are* fully
> >backwards compatible.

[Aahz]
> Then you understand incorrectly.  There's a reason why I emphasized
> "*time*" up above.  It takes time to grok a new feature, time to think
> about whether and how we should argue in favor or against it, time to
> write comprehensible and useful arguments.  In hindsight, I think you
> probably did make the right design decision on "print >>", no matter how
> ugly I think it looks.  But I still think you made absolutely the wrong
> decision to include it in 2.0.

Then I respectfully disagree.  We took plenty of time to discuss
"print >>" amongst ourselves.  I don't see the point of letting the
whole community argue about every little new idea before we include it
in a release.  We want good technical feedback, of course.  But if it
takes time to get emotionally used to an idea, you can use your own
time.

> >With the CVS branch it's *trivial* to keep it going.  We should have
> >learned from the Tcl folks, they've had 8.NpM releases for a while.
> 
> I'm suggesting having one official PythonLabs-created bug fix release as
> being a small incremental effort over the work in the feature release.
> But if you want it to be an entirely community-driven effort, I can't
> argue with that.

We will surely put in an effort, but we're limited in what we can do,
so I'm inviting the community to pitch in.  Even just a wish-list of
fixes that are present in 2.1 that should be merged back into 2.0.1
would help!

> My one central point is that I think this will fail if PythonLabs
> doesn't agree to formally certify each release.

Of course we will do that -- I already said so.  And not just for
2.0.1 -- for all bugfix releases, as long as they make sense.

> I've seen too many cases where a bugfix introduced new bugs somewhere
> else.  Even if "tested", there might be a border case where an
> unexpected result shows up.  Finally, there's the issue of system
> testing, making sure the entire package of bugfixes works correctly.

I hope that the experience with 2.1 will validate most bugfixes that
go into 2.0.1.

> The main reason I suggested two betas was to "lockstep" the bugfix
> release to the next version's feature release.

Unclear what you want there.  Why tie the two together?  How?

> >Or are you thinking of adding small new features to a "bugfix"
> >release?  That ought to be a no-no according to your own philosophy!
> 
> That's correct.  One problem, though, is that sometimes it's a little
> difficult to agree on whether a particular piece of code is a feature or
> a bugfix.  For example, the recent work to resolve case-sensitive
> imports could be argued either way -- and if we want Python 2.0 to run
> on OS X, we'd better decide that it's a bugfix.  ;-)

But the Windows change is clearly a feature, so that can't be added to
2.0.1.  We'll have to discuss this particular one.  If 2.0 doesn't
work on MacOS X now, why couldn't MacOS X users install 2.1?  They
can't have working code that breaks, can they?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Sun Mar  4 06:18:05 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 4 Mar 2001 00:18:05 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMGJDAA.tim.one@home.com>

FYI, in reviewing Misc/HISTORY, it appears that the last Python release
*called* a "pure bugfix release" was in November of 1994 (1.1.1) -- although
"a few new features were added to tkinter" anyway.

fine-by-me-if-we-just-keep-up-the-good-work<wink>-ly y'rs  - tim




From tim.one at home.com  Sun Mar  4 07:00:44 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 4 Mar 2001 01:00:44 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEMHJDAA.tim.one@home.com>

[Aahz]
> ...
> For example, the recent work to resolve case-sensitive imports could
> be argued either way -- and if we want Python 2.0 to run on OS X,
> we'd better decide that it's a bugfix.  ;-)

[Guido]
> But the Windows change is clearly a feature,

Yes.

> so that can't be added to 2.0.1.

That's what Aahz is debating.

> We'll have to discuss this particular one.  If 2.0 doesn't
> work on MacOS X now, why couldn't MacOS X users install 2.1?  They
> can't have working code that breaks, can they?

You're a Giant Corporation that ships a multi-platform product, including
Python 2.0.  Since your IT dept is frightened of its own shadow, they won't
move to 2.1.  Since there is no bound to your greed, you figure that even if
there are only a dozen MacOS X users in the world, you could make 10 bucks
off of them if only you can talk PythonLabs into treating the lack of 2.0
MacOS X support as "a bug", getting PythonLabs to backstitch the port into a
2.0 follow-on (*calling* it 2.0.x serves to pacify your IT paranoids).  No
cost to you, and 10 extra dollars in your pocket.  Everyone wins <wink>.

There *are* some companies so unreasonable in their approach.  Replace "a
dozen" and "10 bucks" by much higher numbers, and the number of companies
mushrooms accordingly.

If we put out a release that actually did nothing except fix legitimate bugs,
PythonLabs may have enough fingers to count the number of downloads.  For
example, keen as *I* was to see a bugfix release for the infamous 1.5.2
"invalid tstate" bug, I didn't expect anyone would pick it up except for Mark
Hammond and the other guy who bumped into it (it was very important to them).
Other people simply won't pick it up unless and until they bump into the bug
it fixes, and due to the same "if it's not obviously broken, *any* change is
dangerous" fear that motivates everyone clinging to old releases by choice.

Curiously, I eventually got my Win95 box into a state where it routinely ran
for a solid week without crashing (the MTBF at the end was about 100x higher
than when I got the machine).  I didn't do that by avoiding MS updates, but
by installing *every* update they offered ASAP, even for subsystems I had no
intention of ever using.  That's the contrarian approach to keeping your
system maximally stable, relying on the observation that the code that works
best is extremely likely to be the code that the developers use themselves.

If someone thinks there's a market for Python bugfix releases that's worth
more than it costs, great -- they can get filthy rich off my appalling lack
of vision <wink>.

"worth-more-than-it-costs"-is-key-ly y'rs  - tim




From tim.one at home.com  Sun Mar  4 07:50:58 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 4 Mar 2001 01:50:58 -0500
Subject: [Python-Dev] a small C style question
In-Reply-To: <05f101c0a2f3$cf4bae10$e46940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEMLJDAA.tim.one@home.com>

[Fredrik Lundh]
> DEC's OpenVMS compiler are a bit pickier than most other compilers.
> among other things, it correctly notices that the "code" variable in
> this statement is an unsigned variable:
>
>     UNICODEDATA:
>
>         if (code < 0 || code >= 65536)
>     ........^
>     %CC-I-QUESTCOMPARE, In this statement, the unsigned
>     expression "code" is being compared with a relational
>     operator to a constant whose value is not greater than
>     zero.  This might not be what you intended.
>     at line number 285 in file UNICODEDATA.C
>
> the easiest solution would of course be to remove the "code < 0"
> part, but code is a Py_UCS4 variable.  what if someone some day
> changes Py_UCS4 to a 64-bit signed integer, for example?
>
> what's the preferred style?
>
> 1) leave it as is, and let OpenVMS folks live with the
> compiler complaint
>
> 2) get rid of "code < 0" and hope that nobody messes
> up the Py_UCS4 declaration
>
> 3) cast "code" to a known unsigned type, e.g:
>
>         if ((unsigned int) code >= 65536)

#2.  The comment at the declaration of Py_UCS4 insists that an unsigned type
be used:

/*
 * Use this typedef when you need to represent a UTF-16 surrogate pair
 * as single unsigned integer.
             ^^^^^^^^
 */
#if SIZEOF_INT >= 4
typedef unsigned int Py_UCS4;
#elif SIZEOF_LONG >= 4
typedef unsigned long Py_UCS4;
#endif

If someone needs to boost that to a 64-bit int someday (hard to imagine ...),
they can boost it to an unsigned 64-bit int just as well.

If you really need to cater to impossibilities <0.5 wink>, #define a
Py_UCS4_IN_RANGE macro next to the typedef, and use the macro instead.




From gmcm at hypernet.com  Sun Mar  4 16:54:50 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Sun, 4 Mar 2001 10:54:50 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEMHJDAA.tim.one@home.com>
References: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>
Message-ID: <3AA21EFA.30660.4C134459@localhost>

[Tim justifies one-release-back mentality]
> You're a Giant Corporation that ships a multi-platform product,
> including Python 2.0.  Since your IT dept is frightened of its
> own shadow, they won't move to 2.1.  Since there is no bound to
> your greed, you figure that even if there are only a dozen MacOS
> X users in the world, you could make 10 bucks off of them if only
> you can talk PythonLabs into treating the lack of 2.0 MacOS X
> support as "a bug", getting PythonLabs to backstitch the port
> into a 2.0 follow-on (*calling* it 2.0.x serves to pacify your IT
> paranoids).  No cost to you, and 10 extra dollars in your pocket.
>  Everyone wins <wink>.

There is a curious psychology involved. I've noticed that a 
significant number of people (roughly 30%) always download 
an older release.

Example: Last week I announced a new release (j) of Installer. 
70% of the downloads were for that release.

There is only one previous Python 2 version of Installer 
available, but of people downloading a Python 2 version, 17% 
chose the older (I always send people to the html page, and 
none of the referrers shows a direct link - so this was a 
conscious decision).

Of people downloading a 1.5.2 release (15% of total), 69% 
chose the latest, and 31% chose an older. This is the stable 
pattern (the fact that 83% of Python 2 users chose the latest 
is skewed by the fact that this was the first week it was 
available).

Since I yank a release if it turns out to introduce bugs, these 
people are not downloading older because they've heard it 
"works better". The interface has hardly changed in the entire 
span of available releases, so these are not people avoiding 
learning something new.

These are people who are simply highly resistant to anything 
new, with no inclination to test their assumptions against 
reality.

As Guido said, Republicans :-). 


- Gordon



From thomas at xs4all.net  Mon Mar  5 01:16:55 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 5 Mar 2001 01:16:55 +0100
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103031910.OAA21663@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sat, Mar 03, 2001 at 02:10:35PM -0500
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com> <mailman.982897324.9109.python-list@python.org> <200103031821.NAA24060@panix3.panix.com> <200103031910.OAA21663@cj20424-a.reston1.va.home.com>
Message-ID: <20010305011655.V9678@xs4all.nl>

On Sat, Mar 03, 2001 at 02:10:35PM -0500, Guido van Rossum wrote:

> But I understand that you're saying that the community has grown so
> conservative that it can't stand new features even if they *are* fully
> backwards compatible.

There is an added dimension, especially with Python. Bugs in the new
features. If it entails changes in the compiler or VM (like import-as, which
changed the meaning of FROM_IMPORT and added an IMPORT_STAR opcode) or if
modules get augmented to use the new features, these changes can introduce
bugs into existing code that doesn't even use the new features itself.

> I wonder, does that extend to new library modules?  Is there also
> resistance against the growth there?  I don't think so -- if anything,
> people are clamoring for more stuff to become standard (while at the
> same time I feel some pressure to cut dead wood, like the old SGI
> multimedia modules).

No (yes), bugfix releases should fix bugs, not add features (nor remove
them). Modules in the std lib are just features.

> So that relegates us at PythonLabs to a number of things: coding new
> modules (boring), or trying to improve performance of the virtual
> machine (equally boring, and difficult to boot), or fixing bugs (did I
> mention boring? :-).

How can you say this ? Okay, so *fixing* bugs isn't terribly exciting, but
hunting them down is one of the best sports around. Same for optimizations:
rewriting the code might be boring (though if you are a fast typist, it
usually doesn't take long enough to get boring :) but thinking them up is
the fun part. 

But who said PythonLabs had to do all the work ? You guys didn't do all the
work in 2.0->2.1, did you ? Okay, so most of the major features are written
by PythonLabs, and most of the decisions are made there, but there's no real
reason for it. Consider the Linux kernel: Linus Torvalds releases the
kernels in the devel 'tree' and usually the first few kernels in the
'stable' tree, and then Alan Cox takes over the stable tree and continues
it. (Note that this analogy isn't quite correct: the stable tree often
introduces new features, new drivers, etc, but avoids real incompatibilities
and usually doesn't require extra upgrades of tools and such.)

I hope you don't think any less of me if I volunteer *again* :-) but I'm
perfectly willing to maintain the bugfix release(s). I also don't think we
should necessarily stay at a single bugfix release. Whether or not a 'beta'
for the bugfix release is necessary, I'm not sure. I don't think so, at
least not if you release multiple bugfix releases. 

Holiday-Greetings-from-Long-Beach-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jeremy at alum.mit.edu  Sun Mar  4 00:32:32 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Sat, 3 Mar 2001 18:32:32 -0500 (EST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <20010305011655.V9678@xs4all.nl>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
	<mailman.982897324.9109.python-list@python.org>
	<200103031821.NAA24060@panix3.panix.com>
	<200103031910.OAA21663@cj20424-a.reston1.va.home.com>
	<20010305011655.V9678@xs4all.nl>
Message-ID: <15009.32528.29406.232901@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

  [GvR:]
  >> So that relegates us at PythonLabs to a number of things: coding
  >> new modules (boring), or trying to improve performance of the
  >> virtual machine (equally boring, and difficult to boot), or
  >> fixing bugs (did I mention boring? :-).

  TW> How can you say this ? Okay, so *fixing* bugs isn't terribly
  TW> exciting, but hunting them down is one of the best sports
  TW> around. Same for optimizations: rewriting the code might be
  TW> boring (though if you are a fast typist, it usually doesn't take
  TW> long enough to get boring :) but thinking them up is the fun
  TW> part.

  TW> But who said PythonLabs had to do all the work ? You guys didn't
  TW> do all the work in 2.0->2.1, did you ? Okay, so most of the
  TW> major features are written by PythonLabs, and most of the
  TW> decisions are made there, but there's no real reason for
  TW> it.

Most of the work I did for Python 2.0 was fixing bugs.  It was a lot
of fairly tedious but necessary work.  I have always imagined that
this was work that most people wouldn't do unless they were paid to do
it.  (python-dev seems to have a fair number of exceptions, though.)

Working on major new features has a lot more flash, so I imagine that
volunteers would be more inclined to help.  Neil's work on GC or yours
on augmented assignment are examples.

There's nothing that says we have to do all the work.  In fact, I
imagine we'll continue to collectively spend a lot of time on
maintenance issues.  We get paid to do it, and we get to hack on Zope
and ZODB the rest of the time, which is also a lot of fun.

Jeremy



From jack at oratrix.nl  Mon Mar  5 11:47:17 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 05 Mar 2001 11:47:17 +0100
Subject: [Python-Dev] os module UserDict
Message-ID: <20010305104717.A5104373C95@snelboot.oratrix.nl>

Importing os has started failing on the Mac since the riscos mods are in 
there: it tries to use UserDict without having imported it first.

I think that the problem is that the whole _Environ stuff should be inside the 
else part of the try/except, but I'm not sure I fully understand what goes on. 
Could whoever did these mods have a look?

Also, it seems that the whole if name != "riscos" is a bit of a hack...
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++





From phil at river-bank.demon.co.uk  Mon Mar  5 17:15:13 2001
From: phil at river-bank.demon.co.uk (Phil Thompson)
Date: Mon, 05 Mar 2001 16:15:13 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
Message-ID: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>

Any chance of the attached small patch being applied to enable weak
references to functions?

It's particularly useful for lambda functions and closes the "very last
loophole where a programmer can cause a PyQt script to seg fault" :)

Phil
-------------- next part --------------
diff -ruN Python-2.1b1.orig/Include/funcobject.h Python-2.1b1/Include/funcobject.h
--- Python-2.1b1.orig/Include/funcobject.h	Thu Jan 25 20:06:58 2001
+++ Python-2.1b1/Include/funcobject.h	Mon Mar  5 13:00:58 2001
@@ -16,6 +16,7 @@
     PyObject *func_doc;
     PyObject *func_name;
     PyObject *func_dict;
+    PyObject *func_weakreflist;
 } PyFunctionObject;
 
 extern DL_IMPORT(PyTypeObject) PyFunction_Type;
diff -ruN Python-2.1b1.orig/Objects/funcobject.c Python-2.1b1/Objects/funcobject.c
--- Python-2.1b1.orig/Objects/funcobject.c	Thu Mar  1 06:06:37 2001
+++ Python-2.1b1/Objects/funcobject.c	Mon Mar  5 13:39:37 2001
@@ -245,6 +245,8 @@
 static void
 func_dealloc(PyFunctionObject *op)
 {
+	PyObject_ClearWeakRefs((PyObject *) op);
+
 	PyObject_GC_Fini(op);
 	Py_DECREF(op->func_code);
 	Py_DECREF(op->func_globals);
@@ -336,4 +338,7 @@
 	Py_TPFLAGS_DEFAULT | Py_TPFLAGS_GC, /*tp_flags*/
 	0,		/* tp_doc */
 	(traverseproc)func_traverse,	/* tp_traverse */
+	0,		/* tp_clear */
+	0,		/* tp_richcompare */
+	offsetof(PyFunctionObject, func_weakreflist)	/* tp_weaklistoffset */
 };
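For reference, the behaviour the patch enables can be sketched from the Python side. This is a minimal illustration, runnable on any Python where functions support weak references; the names `handler` and `ref` are made up for the example:

```python
import weakref

def handler(x):
    return x * 2

# Without weakref support on function objects, this line would raise
# TypeError: cannot create weak reference to 'function' object.
ref = weakref.ref(handler)
assert ref() is handler
assert ref()(21) == 42

del handler          # drop the only strong reference
assert ref() is None  # the weak reference is now dead (CPython refcounting)
```

This is exactly the pattern PyQt-style callback registries rely on: hold weak references to handlers so a dead callback cannot be invoked and crash the extension.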

From thomas at xs4all.net  Tue Mar  6 00:28:50 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 6 Mar 2001 00:28:50 +0100
Subject: [Python-Dev] Patch for Weak References for Functions
In-Reply-To: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>; from phil@river-bank.demon.co.uk on Mon, Mar 05, 2001 at 04:15:13PM +0000
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
Message-ID: <20010306002850.B9678@xs4all.nl>

On Mon, Mar 05, 2001 at 04:15:13PM +0000, Phil Thompson wrote:

> Any chance of the attached small patch being applied to enable weak
> references to functions?

It's probably best to upload it to SourceForge, even though it seems pretty
broken right now. Especially during the Python conference, posts are
terribly likely to fall into oblivion.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From skip at mojam.com  Tue Mar  6 01:33:05 2001
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:33:05 -0600 (CST)
Subject: [Python-Dev] Who wants this GCC/Solaris bug report?
Message-ID: <15012.12353.311124.819970@beluga.mojam.com>

I was assigned the following bug report:

   http://sourceforge.net/tracker/?func=detail&aid=232787&group_id=5470&atid=105470

I made a pass through the code in question, made one change to posixmodule.c
that I thought appropriate (should squelch one warning) and some comments
about the other warnings.  I'm unable to actually test any changes since I
don't run Solaris, so I don't feel comfortable doing anything more.  Can
someone else take this one over?  In theory, my comments should help you
zero in on a fix faster (famous last words).

Skip




From skip at mojam.com  Tue Mar  6 01:41:50 2001
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:41:50 -0600 (CST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
References: <15009.17989.88203.844343@cj42289-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCEELHJDAA.tim.one@home.com>
Message-ID: <15012.12878.853762.563753@beluga.mojam.com>

    Tim> Note there was never a bugfix release for 1.5.2, despite that 1.5.2
    Tim> had some serious bugs, and that 1.5.2 was current for an
    Tim> unprecedentedly long time.  Guido put out a call for volunteers to
    Tim> produce a 1.5.2 bugfix release, but nobody responded.  Past is
    Tim> prelude ...

Yes, but 1.5.2 source was managed differently.  It was released while the
source was still "captive" to CNRI and the conversion to Sourceforge was
relatively speaking right before the 2.0 release and had the added
complication that it more-or-less coincided with the formation of
PythonLabs.  With the source tree where someone can easily branch it, I
think it's now feasible to create a bug fix branch and have someone
volunteer to manage additions to it (that is, be the filter that decides if
a code change is a bug fix or a new feature).

Skip



From skip at mojam.com  Tue Mar  6 01:48:33 2001
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:48:33 -0600 (CST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <200103032155.QAA05049@panix3.panix.com>
References: <LNBBLJKPBEHFEDALKOLCMEEPJBAA.tim.one@home.com>
	<mailman.982897324.9109.python-list@python.org>
	<200103031821.NAA24060@panix3.panix.com>
	<mailman.983646726.27322.python-list@python.org>
	<200103032155.QAA05049@panix3.panix.com>
Message-ID: <15012.13281.629270.275993@beluga.mojam.com>

    aahz> Yup.  The idea is that because it's always an N and N-1 pair, the
    aahz> base code is the same for both and applying patches to both should
    aahz> be (relatively speaking) a small amount of extra work.  Most of
    aahz> the work lies in deciding *which* patches should go into N-1.

The only significant problem I see is making sure submitted patches contain
just bug fixes or new features and not a mixture of the two.

    aahz> The main reason I suggested two betas was to "lockstep" the bugfix
    aahz> release to the next version's feature release.

I don't see any real reason to sync them.  There's no particular reason I
can think of why you couldn't have 2.1.1, 2.1.2 and 2.1.3 releases before
2.2.0 is released and not have any bugfix release coincident with 2.2.0.
Presumably, any bug fixes between the release of 2.1.3 and 2.2.0 would also
appear in the feature branch.  As long as there was someone willing to
manage a particular bug fix branch, such a branch could continue for a
relatively long ways, long past the next feature release.

Skip




From skip at mojam.com  Tue Mar  6 01:53:38 2001
From: skip at mojam.com (Skip Montanaro)
Date: Mon, 5 Mar 2001 18:53:38 -0600 (CST)
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <3AA21EFA.30660.4C134459@localhost>
References: <200103032218.RAA22308@cj20424-a.reston1.va.home.com>
	<3AA21EFA.30660.4C134459@localhost>
Message-ID: <15012.13586.201583.620776@beluga.mojam.com>

    Gordon> There is a curious psychology involved. I've noticed that a
    Gordon> significant number of people (roughly 30%) always download an
    Gordon> older release.

    Gordon> Example: Last week I announced a new release (j) of Installer.
    Gordon> 70% of the downloads were for that release.

    ...

    Gordon> Of people downloading a 1.5.2 release (15% of total), 69% 
    Gordon> chose the latest, and 31% chose an older. This is the stable 
    Gordon> pattern (the fact that 83% of Python 2 users chose the latest 
    Gordon> is skewed by the fact that this was the first week it was 
    Gordon> available).

Check your web server's referral logs.  I suspect a non-trivial fraction of
those 30% were coming via offsite links such as search engine referrals and
weren't even aware a new installer was available.

Skip



From gmcm at hypernet.com  Tue Mar  6 03:09:38 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Mon, 5 Mar 2001 21:09:38 -0500
Subject: [Python-Dev] Re: Bug fix releases
In-Reply-To: <15012.13586.201583.620776@beluga.mojam.com>
References: <3AA21EFA.30660.4C134459@localhost>
Message-ID: <3AA40092.13561.536C8052@localhost>

>     Gordon> Of people downloading a 1.5.2 release (15% of total), 69%
>     Gordon> chose the latest, and 31% chose an older. This is the stable
>     Gordon> pattern (the fact that 83% of Python 2 users chose the latest
>     Gordon> is skewed by the fact that this was the first week it was
>     Gordon> available).
[Skip] 
> Check your web server's referral logs.  I suspect a non-trivial
> fraction of those 30% were coming via offsite links such as
> search engine referrals and weren't even aware a new installer
> was available.

That's the whole point - these stats are from the referrals. My 
download directory is not indexed or browsable. I only 
announce the page with the download links on it. And sure 
enough, all downloads come from there.

- Gordon



From fdrake at acm.org  Mon Mar  5 17:15:27 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Mon, 5 Mar 2001 11:15:27 -0500 (EST)
Subject: [Python-Dev] XML runtime errors?
In-Reply-To: <01f701c01d05$0aa98e20$766940d5@hagrid>
References: <009601c01cf1$467458e0$766940d5@hagrid>
	<200009122155.QAA01452@cj20424-a.reston1.va.home.com>
	<01f701c01d05$0aa98e20$766940d5@hagrid>
Message-ID: <15011.48031.772007.248246@cj42289-a.reston1.va.home.com>

Fredrik Lundh writes:
 > how about adding:
 > 
 >     class XMLError(RuntimeError):
 >         pass

  Looks like someone already added Error for this.

 > > > what's wrong with "SyntaxError"?
 > > 
 > > That would be the wrong exception unless it's parsing Python source
 > > code.
 > 
 > gotta fix netrc.py then...

  And this still isn't done.  I've made changes in my working copy,
introducing a specific exception which carries useful information
(msg, filename, lineno), so that all syntax exceptions get this
information as well.
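A minimal sketch of such an exception class (the name and layout here are illustrative; the actual working-copy changes may differ in detail):

```python
# A parse error that carries context, so callers can report
# "what went wrong, in which file, on which line".
class NetrcParseError(Exception):
    def __init__(self, msg, filename=None, lineno=None):
        self.msg = msg
        self.filename = filename
        self.lineno = lineno
        Exception.__init__(self, msg)

    def __str__(self):
        return "%s (%s, line %s)" % (self.msg, self.filename, self.lineno)

err = NetrcParseError("bad toplevel token", "~/.netrc", 3)
assert err.lineno == 3
assert "line 3" in str(err)
```

Carrying (msg, filename, lineno) as attributes rather than burying them in the message string lets tools recover the location programmatically.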


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From martin at loewis.home.cs.tu-berlin.de  Tue Mar  6 08:22:58 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 6 Mar 2001 08:22:58 +0100
Subject: [Python-Dev] os module UserDict
Message-ID: <200103060722.f267Mwe01222@mira.informatik.hu-berlin.de>

> I think that the problem is that the whole _Environ stuff should be
> inside the else part of the try/except, but I'm not sure I fully
> understand what goes on.  Could whoever did these mods have a look?

I agree that this patch was broken; the _Environ stuff was in the else
part before. The change was committed by gvanrossum; the checkin
comment says that its author was dschwertberger. 

> Also, it seems that the whole if name != "riscos" is a bit of a
> hack...

I agree. What it seems to say is 'even though riscos does have a
putenv, we cannot/should not/must not wrap environ with a UserDict.'

I'd suggest backing out this part of the patch, unless a consistent
story can be given RSN.

Regards,
Martin

P.S. os.py mentions an "import riscos". Where is that module?



From jack at oratrix.nl  Tue Mar  6 14:31:12 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 06 Mar 2001 14:31:12 +0100
Subject: [Python-Dev] __all__ in urllib
Message-ID: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>

The __all__ that was added to urllib recently causes me quite a lot of grief 
(This is "me the application programmer", not "me the macpython maintainer"). 
I have a module that extends urllib: what used to work was a simple 
"from urllib import *" plus a few override functions, but with this 
__all__ stuff that doesn't work anymore.

I started fixing up __all__, but then I realised that this is probably not the 
right solution. "from xxx import *" can really be used for two completely 
distinct cases. One is as a convenience, where the user doesn't want to prefix 
all references with xxx, but the other distinct case is in a module that is an 
extension of another module. In this second case you would really want to 
bypass this whole __all__ mechanism.

I think that the latter is a valid use case for import *, and that there 
should be some way to get this behaviour.
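The two cases can be sketched with a throwaway module (the names `mymod`, `public`, and `helper` are hypothetical, purely for illustration):

```python
import sys
import types

# Build a fake module with one exported and two unexported names.
mod = types.ModuleType("mymod")
mod.public = 1
mod._private = 2
mod.helper = 3
mod.__all__ = ["public"]   # __all__ restricts what "import *" exposes
sys.modules["mymod"] = mod

# Case 1, the convenience import: __all__ filters the namespace.
ns = {}
exec("from mymod import *", ns)
assert "public" in ns
assert "helper" not in ns      # hidden by __all__

# Case 2, an extending module that wants everything: bypass __all__
# by copying the module dict directly.
ns2 = {k: v for k, v in vars(mod).items() if not k.startswith("_")}
assert "helper" in ns2 and "public" in ns2
```

So an extension module can already get the old behaviour, at the cost of an explicit `vars()` copy instead of the tidy `import *` line.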
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++





From skip at mojam.com  Tue Mar  6 14:51:49 2001
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 6 Mar 2001 07:51:49 -0600 (CST)
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
References: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
Message-ID: <15012.60277.150431.237935@beluga.mojam.com>

    Jack> I started fixing up __all__, but then I realised that this is
    Jack> probably not the right solution. 

    Jack> One is as a convenience, where the user doesn't want to prefix all
    Jack> references with xxx, but the other distinct case is in a module
    Jack> that is an extension of another module. In this second case you
    Jack> would really want to bypass this whole __all__ mechanism.

    Jack> I think that the latter is a valid use case for import *, and that
    Jack> there should be some way to get this behaviour.

Two things come to mind.  One, perhaps a more careful coding of urllib to
avoid exposing names it shouldn't export would be a better choice.  Two,
perhaps those symbols that are not documented but that would be useful when
extending urllib functionality should be documented and added to __all__.

Here are the non-module names I didn't include in urllib.__all__:

    MAXFTPCACHE
    localhost
    thishost
    ftperrors
    noheaders
    ftpwrapper
    addbase
    addclosehook
    addinfo
    addinfourl
    basejoin
    toBytes
    unwrap
    splittype
    splithost
    splituser
    splitpasswd
    splitport
    splitnport
    splitquery
    splittag
    splitattr
    splitvalue
    splitgophertype
    always_safe
    getproxies_environment
    getproxies
    getproxies_registry
    test1
    reporthook
    test
    main

None are documented, so there are no guarantees if you use them (I have
subclassed addinfourl in the past myself).

Skip



From sjoerd at oratrix.nl  Tue Mar  6 17:19:11 2001
From: sjoerd at oratrix.nl (Sjoerd Mullender)
Date: Tue, 06 Mar 2001 17:19:11 +0100
Subject: [Python-Dev] Tim Berners-Lee likes Python
In-Reply-To: Your message of Fri, 02 Mar 2001 09:22:27 -0500.
             <200103021422.JAA06497@cj20424-a.reston1.va.home.com> 
References: <200103021422.JAA06497@cj20424-a.reston1.va.home.com> 
Message-ID: <20010306161912.54E9A301297@bireme.oratrix.nl>

At the meeting of W3C working groups last week in Cambridge, MA, I saw
that he used Python...

On Fri, Mar 2 2001 Guido van Rossum wrote:

> I was tickled when I found a quote from Tim Berners-Lee about Python
> here: http://www.w3.org/2000/10/swap/#L88
> 
> Most quotable part: "Python is a language you can get into on one
> battery!"
> 
> We should be able to use that for PR somewhere...
> 
> --Guido van Rossum (home page: http://www.python.org/~guido/)
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
> 

-- Sjoerd Mullender <sjoerd.mullender at oratrix.com>



From dietmar at schwertberger.de  Tue Mar  6 23:54:30 2001
From: dietmar at schwertberger.de (Dietmar Schwertberger)
Date: Tue, 6 Mar 2001 23:54:30 +0100 (GMT)
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <200103060722.f267Mwe01222@mira.informatik.hu-berlin.de>
Message-ID: <Marcel-1.53-0306225430-0b02%2U@schwertberger.freenet.de>

Hi Martin,

thanks for CC'ing to me.

On Tue 06 Mar, Martin v. Loewis wrote:
> > I think that the problem is that the whole _Environ stuff should be
> > inside the else part of the try/except, but I'm not sure I fully
> > understand what goes on.  Could whoever did these mods have a look?
> 
> I agree that this patch was broken; the _Environ stuff was in the else
> part before. The change was committed by gvanrossum; the checkin
> comment says that its author was dschwertberger. 
Yes, it's from me. Unfortunately a whitespace problem with me, my editor
and my diffutils required Guido to apply most of the patches manually...


> > Also, it seems that the whole if name != "riscos" is a bit of a
> > hack...
> 
> I agree. What it seems to say is 'even though riscos does have a
> putenv, we cannot/should not/must not wrap environ with a UserDict.'
> 
> I'd suggest to back-out this part of the patch, unless a consistent
> story can be given RSN.
In plat-riscos there is a different UserDict-like implementation of
environ which is imported at the top of os.py in the 'riscos' part.
'name != "riscos"' just avoids overriding this. Maybe it would have
been better to include riscosenviron._Environ into os.py, as this would
look - and be - less hacky?
I must admit, I didn't care much when I started with riscosenviron.py
by just copying UserDict.py last year.

The RISC OS implementation doesn't store any data itself but just
emulates a dictionary with getenv() and putenv().
This is better suited to the way the environment is used under RISC OS:
it holds a lot of configuration data and may easily grow to a few hundred
KB, so it is undesirable to import all of the data at startup when it is
not really needed.
Also the environment is being used for communication between tasks
sometimes (the changes don't just affect subprocesses started later,
but all tasks) and so read access to environ should return the current
value.
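
[Editor's aside: the approach described above — a dictionary-like environ
that holds no data of its own and defers every access to getenv()/putenv() —
can be sketched as follows. The class and names here are hypothetical, not
the actual riscosenviron code:]

```python
import os

class LazyEnviron:
    """Dict-like view over the process environment.  Reads go straight to
    getenv(), so values changed by other tasks stay current, and nothing
    is cached at startup."""

    def __getitem__(self, key):
        value = os.getenv(key)
        if value is None:
            raise KeyError(key)
        return value

    def __setitem__(self, key, value):
        # Writes go straight to the C-level environment.  (Note: in CPython
        # os.getenv() actually reads the os.environ snapshot, so a faithful
        # RISC OS implementation would use a platform call for both sides.)
        os.putenv(key, value)

    def get(self, key, default=None):
        value = os.getenv(key)
        return default if value is None else value

env = LazyEnviron()
env["DEMO_VAR"] = "42"   # forwarded to putenv(); nothing stored locally
```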


And this is just _one_ of the points where RISC OS is different from
the rest of the world...


> Regards,
> Martin
> 
> P.S. os.py mentions an "import riscos". Where is that module?
riscosmodule.c lives in the RISCOS subdirectory together with all the
other RISC OS specific stuff needed for building the binaries.


Regards,

Dietmar

P.S.: How can I subscribe to python-dev (at least read-only)?
      I couldn't find a reference on python.org or Sourceforge.
P.P.S.: If you wonder what RISC OS is and why it is different:
        You may remember the 'Archimedes' from the British
        manufacturer Acorn. This was the first RISC OS computer...




From martin at loewis.home.cs.tu-berlin.de  Wed Mar  7 07:38:52 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 7 Mar 2001 07:38:52 +0100
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <Marcel-1.53-0306225430-0b02%2U@schwertberger.freenet.de>
	(message from Dietmar Schwertberger on Tue, 6 Mar 2001 23:54:30 +0100
	(GMT))
References: <Marcel-1.53-0306225430-0b02%2U@schwertberger.freenet.de>
Message-ID: <200103070638.f276cqj01518@mira.informatik.hu-berlin.de>

> Yes, it's from me. Unfortunately a whitespace problem with me, my editor
> and my diffutils required Guido to apply most of the patches manually...

I see. What do you think about the patch included below? It also gives
you the default argument to os.getenv, which riscosmodule does not
have.

> In plat-riscos there is a different UserDict-like implementation of
> environ which is imported at the top of os.py in the 'riscos' part.
> 'name != "riscos"' just avoids overriding this. Maybe it would have
> been better to include riscosenviron._Environ into os.py, as this would
> look - and be - less hacky?

No, I think it is good to have the platform-specific code in platform
modules, and only merge them appropriately in os.py.

> P.S.: How can I subscribe to python-dev (at least read-only)?

You can't; it is by invitation only. You can find the archives at

http://mail.python.org/pipermail/python-dev/

Regards,
Martin

Index: os.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/os.py,v
retrieving revision 1.46
diff -u -r1.46 os.py
--- os.py	2001/03/06 15:26:07	1.46
+++ os.py	2001/03/07 06:31:34
@@ -346,17 +346,19 @@
     raise exc, arg
 
 
-if name != "riscos":
-    # Change environ to automatically call putenv() if it exists
-    try:
-        # This will fail if there's no putenv
-        putenv
-    except NameError:
-        pass
-    else:
-        import UserDict
+# Change environ to automatically call putenv() if it exists
+try:
+    # This will fail if there's no putenv
+    putenv
+except NameError:
+    pass
+else:
+    import UserDict
 
-    if name in ('os2', 'nt', 'dos'):  # Where Env Var Names Must Be UPPERCASE
+    if name == "riscos":
+        # On RISC OS, all env access goes through getenv and putenv
+        from riscosenviron import _Environ
+    elif name in ('os2', 'nt', 'dos'):  # Where Env Var Names Must Be UPPERCASE
         # But we store them as upper case
         class _Environ(UserDict.UserDict):
             def __init__(self, environ):
Index: plat-riscos/riscosenviron.py
===================================================================
RCS file: /cvsroot/python/python/dist/src/Lib/plat-riscos/riscosenviron.py,v
retrieving revision 1.1
diff -u -r1.1 riscosenviron.py
--- plat-riscos/riscosenviron.py	2001/03/02 05:55:07	1.1
+++ plat-riscos/riscosenviron.py	2001/03/07 06:31:34
@@ -3,7 +3,7 @@
 import riscos
 
 class _Environ:
-    def __init__(self):
+    def __init__(self, initial = None):
         pass
     def __repr__(self):
         return repr(riscos.getenvdict())



From dietmar at schwertberger.de  Wed Mar  7 09:44:54 2001
From: dietmar at schwertberger.de (Dietmar Schwertberger)
Date: Wed, 7 Mar 2001 09:44:54 +0100 (GMT)
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <200103070638.f276cqj01518@mira.informatik.hu-berlin.de>
Message-ID: <Marcel-1.53-0307084454-b492%2U@schwertberger.freenet.de>

On Wed 07 Mar, Martin v. Loewis wrote:
> > Yes, it's from me. Unfortunately a whitespace problem with me, my editor
> > and my diffutils required Guido to apply most of the patches manually...
> 
> I see. What do you think about the patch included below? It also gives
> you the default argument to os.getenv, which riscosmodule does not
> have.
Yes, looks good. Thanks.
Please don't forget to replace the 'from riscosenviron import...' statement
from the riscos section at the start of os.py with an empty 'environ' as
there is no environ in riscosmodule.c:
(The following patch also fixes a bug: 'del ce' instead of 'del riscos')

=========================================================================
*diff -c Python-200:$.Python-2/1b1.Lib.os/py SCSI::SCSI4.$.AcornC_C++.Python.!Python.Lib.os/py 
*** Python-200:$.Python-2/1b1.Lib.os/py Fri Mar  2 07:04:51 2001
--- SCSI::SCSI4.$.AcornC_C++.Python.!Python.Lib.os/py Wed Mar  7 08:31:33 2001
***************
*** 160,170 ****
      import riscospath
      path = riscospath
      del riscospath
!     from riscosenviron import environ
  
      import riscos
      __all__.extend(_get_exports_list(riscos))
!     del ce
  
  else:
      raise ImportError, 'no os specific module found'
--- 160,170 ----
      import riscospath
      path = riscospath
      del riscospath
!     environ = {}
  
      import riscos
      __all__.extend(_get_exports_list(riscos))
!     del riscos
  
  else:
      raise ImportError, 'no os specific module found'
========================================================================

If you change riscosenviron.py, would you mind replacing 'setenv' with
'putenv'? It seems '__setitem__' has never been tested...


Regards,

Dietmar




From martin at loewis.home.cs.tu-berlin.de  Wed Mar  7 10:11:46 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 7 Mar 2001 10:11:46 +0100
Subject: [Python-Dev] Re: os module UserDict
In-Reply-To: <Marcel-1.53-0307084454-b492%2U@schwertberger.freenet.de>
	(message from Dietmar Schwertberger on Wed, 7 Mar 2001 09:44:54 +0100
	(GMT))
References: <Marcel-1.53-0307084454-b492%2U@schwertberger.freenet.de>
Message-ID: <200103070911.f279Bks02780@mira.informatik.hu-berlin.de>

> Please don't forget to replace the 'from riscosenviron import...' statement
> from the riscos section at the start of os.py with an empty 'environ' as
> there is no environ in riscosmodule.c:

There used to be one in riscosenviron, which you had imported. I've
deleted the entire import (trusting that environ will be initialized
later on); and removed the riscosenviron.environ, which now only has
the _Environ class.

> (The following patch also fixes a bug: 'del ce' instead of 'del riscos')

That change was already applied (probably Guido caught the error when
editing the change in).

> If you change riscosenviron.py, would you mind replacing 'setenv' with
> 'putenv'? It seems '__setitem__' has never been tested...

Done.

Martin



From greg at cosc.canterbury.ac.nz  Thu Mar  8 05:06:20 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 08 Mar 2001 17:06:20 +1300 (NZDT)
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
Message-ID: <200103080406.RAA04034@s454.cosc.canterbury.ac.nz>

Jack Jansen <jack at oratrix.nl>:

> but the other distinct case is in a module that is an 
> extension of another module. In this second case you would really want to 
> bypass this whole __all__ mechanism.
> 
> I think that the latter is a valid use case for import *, and that there 
> should be some way to get this behaviour.

How about:

  from foo import **

meaning "give me ALL the stuff in module foo, no, really,
I MEAN it" (possibly even including _ names).

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From thomas at xs4all.net  Fri Mar  9 00:20:57 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 9 Mar 2001 00:20:57 +0100
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/nondist/sandbox test.txt,1.1,NONE
In-Reply-To: <E14b2wY-0005VS-00@usw-pr-cvs1.sourceforge.net>; from jackjansen@users.sourceforge.net on Thu, Mar 08, 2001 at 08:07:10AM -0800
References: <E14b2wY-0005VS-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <20010309002057.H404@xs4all.nl>

On Thu, Mar 08, 2001 at 08:07:10AM -0800, Jack Jansen wrote:

> Testing SSH access from the Mac with MacCVS Pro. It seems to work:-)

Oh boy oh boy! Does that mean you'll merge the MacPython tree into the
normal CVS tree ? Don't forget to assign the proper rights to the PSF :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at acm.org  Thu Mar  8 09:28:43 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Thu, 8 Mar 2001 03:28:43 -0500 (EST)
Subject: [Python-Dev] Patch for Weak References for Functions
In-Reply-To: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
Message-ID: <15015.17083.582010.93308@localhost.localdomain>

Phil Thompson writes:
 > Any chance of the attached small patch be applied to enable weak
 > references to functions?
 > 
 > It's particularly useful for lambda functions and closes the "very last
 > loophole where a programmer can cause a PyQt script to seg fault" :)

Phil,
  Can you explain how this would help with the memory issues?  I'd
like to have a better idea of how this would make things work right.
Are there issues with the cyclic GC with respect to the Qt/KDE
bindings?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From phil at river-bank.demon.co.uk  Sat Mar 10 02:20:56 2001
From: phil at river-bank.demon.co.uk (Phil Thompson)
Date: Sat, 10 Mar 2001 01:20:56 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk> <15015.17083.582010.93308@localhost.localdomain>
Message-ID: <3AA98178.35B0257D@river-bank.demon.co.uk>

"Fred L. Drake, Jr." wrote:
> 
> Phil Thompson writes:
>  > Any chance of the attached small patch be applied to enable weak
>  > references to functions?
>  >
>  > It's particularly useful for lambda functions and closes the "very last
>  > loophole where a programmer can cause a PyQt script to seg fault" :)
> 
> Phil,
>   Can you explain how this would help with the memory issues?  I'd
> like to have a better idea of how this would make things work right.
> Are there issues with the cyclic GC with respect to the Qt/KDE
> bindings?

Ok, some background...

Qt implements a component model for its widgets. You build applications
by sub-classing the standard widgets and then "connect" them together.
Connections are made between signals and slots - both are defined as
class methods. Connections perform the same function as callbacks in
more traditional GUI toolkits like Xt. Signals/slots have the advantage
of being type safe and the resulting component model is very powerful -
it encourages class designers to build functionally rich component
interfaces.

PyQt supports this model. It also allows slots to be any Python callable
object - usually a class method. You create a connection between a
signal and slot using the "connect" method of the QObject class (from
which all objects that have signals or slots are derived). connect()
*does not* increment the reference count of a slot that is a Python
callable object. This is a design decision - earlier versions did do
this but it almost always results in circular reference counts. The
downside is that, if the slot object no longer exists when the signal is
emitted (because the programmer has forgotten to keep a reference to the
class instance alive) then the usual result is a seg fault. These days,
this is the only way a PyQt programmer can cause a seg fault with bad
code (famous last words!). This accounts for 95% of PyQt programmers'
problem reports.

With Python v2.1, connect() creates a weak reference to the Python
callable slot. When the signal is emitted, PyQt (actually it's SIP)
finds out that the callable has disappeared and takes care not to cause
the seg fault. The problem is that v2.1 only implements weak references
for class instance methods - not for all callables.

Most of the time callables other than instance methods are fairly fixed
- they are unlikely to disappear - not many scripts start deleting
function definitions. The exception, however, is lambda functions. It is
sometimes convenient to define a slot as a lambda function in order to
bind an extra parameter to the slot. Obviously lambda functions are much
more transient than regular functions - a PyQt programmer can easily
forget to make sure a reference to the lambda function stays alive. The
patch I proposed gives the PyQt programmer the same protection for
lambda functions as Python v2.1 gives them for class instance methods.
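
[Editor's aside: the scheme Phil describes can be sketched in a few lines,
assuming (as his patch provides, and as modern Python allows) that plain and
lambda functions are weakly referenceable. The Signal class below is a toy
stand-in for illustration, not PyQt's actual machinery:]

```python
import weakref

class Signal:
    """Toy signal holding only a weak reference to its slot, so that
    connect() does not keep the slot alive (mirroring PyQt's design)."""
    def __init__(self):
        self._slot_ref = None

    def connect(self, func):
        # weakref.ref on a plain function or lambda: the capability the
        # proposed patch adds (and which modern Python has).
        self._slot_ref = weakref.ref(func)

    def emit(self, *args):
        slot = self._slot_ref() if self._slot_ref is not None else None
        if slot is None:
            return None        # slot was collected: ignore, don't crash
        return slot(*args)

sig = Signal()
double = lambda x: x * 2
sig.connect(double)
print(sig.emit(21))    # 42
del double             # last strong reference gone; lambda is collected
print(sig.emit(21))    # None
```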

To be honest, I don't see why weak references have been implemented as a
bolt-on module that only supports one particular object type. The thing
I most like about the Python implementation is how consistent it is.
Weak references should be implemented for every object type - even for
None - you never know when it might come in useful.

As far as cyclic GC is concerned - I've ignored it completely, nobody
has made any complaints - so it either works without any problems, or
none of my user base is using it.

Phil



From skip at mojam.com  Sat Mar 10 02:49:04 2001
From: skip at mojam.com (Skip Montanaro)
Date: Fri, 9 Mar 2001 19:49:04 -0600 (CST)
Subject: [Python-Dev] Patch for Weak References for Functions
In-Reply-To: <3AA98178.35B0257D@river-bank.demon.co.uk>
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
	<15015.17083.582010.93308@localhost.localdomain>
	<3AA98178.35B0257D@river-bank.demon.co.uk>
Message-ID: <15017.34832.44442.981293@beluga.mojam.com>

    Phil> This is a design decision - earlier versions did do this but it
    Phil> almost always results in circular reference counts. 

With cyclic GC couldn't you just let those circular reference counts occur
and rely on the GC machinery to break the cycles?  Or do you have __del__
methods? 

Skip



From paulp at ActiveState.com  Sat Mar 10 03:19:41 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Fri, 09 Mar 2001 18:19:41 -0800
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk> <15015.17083.582010.93308@localhost.localdomain> <3AA98178.35B0257D@river-bank.demon.co.uk>
Message-ID: <3AA98F3D.E01AD657@ActiveState.com>

Phil Thompson wrote:
> 
>...
> 
> To be honest, I don't see why weak references have been implemented as a
> bolt-on module that only supports one particular object type. The thing
> I most like about the Python implementation is how consistent it is.
> Weak references should be implemented for every object type - even for
> None - you never know when it might come in useful.

Weak references add a pointer to each object. This could add up for
(e.g.) integers. The idea is that you only pay the cost of weak
references for objects that you would actually create weak references
to.

-- 
Python:
    Programming the way
    Guido
    indented it.



From phil at river-bank.demon.co.uk  Sat Mar 10 12:06:13 2001
From: phil at river-bank.demon.co.uk (Phil Thompson)
Date: Sat, 10 Mar 2001 11:06:13 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk> <15015.17083.582010.93308@localhost.localdomain> <3AA98178.35B0257D@river-bank.demon.co.uk> <3AA98F3D.E01AD657@ActiveState.com>
Message-ID: <3AAA0AA5.1E6983C2@river-bank.demon.co.uk>

Paul Prescod wrote:
> 
> Phil Thompson wrote:
> >
> >...
> >
> > To be honest, I don't see why weak references have been implemented as a
> > bolt-on module that only supports one particular object type. The thing
> > I most like about the Python implementation is how consistent it is.
> > Weak references should be implemented for every object type - even for
> > None - you never know when it might come in useful.
> 
> Weak references add a pointer to each object. This could add up for
> (e.g.) integers. The idea is that you only pay the cost of weak
> references for objects that you would actually create weak references
> to.

Yes, I know, and I'm suggesting that people will always find extra uses
for things which the original designers hadn't thought of. Better to be
consistent (and allow weak references to anything) than to try to
anticipate (wrongly) how people might want to use it in the future -
although I appreciate that the implementation cost might be too high.
Perhaps the question should be "what types make no sense with weak
references" (and exclude them) rather than "what types might be able to
use weak references" (and include them).

Having said that, my only immediate requirement is to allow weak
references to functions, and I'd be happy if only that was implemented.

Phil



From phil at river-bank.demon.co.uk  Sat Mar 10 12:06:07 2001
From: phil at river-bank.demon.co.uk (Phil Thompson)
Date: Sat, 10 Mar 2001 11:06:07 +0000
Subject: [Python-Dev] Patch for Weak References for Functions
References: <3AA3BB91.8193ACBA@river-bank.demon.co.uk>
			<15015.17083.582010.93308@localhost.localdomain>
			<3AA98178.35B0257D@river-bank.demon.co.uk> <15017.34832.44442.981293@beluga.mojam.com>
Message-ID: <3AAA0A9F.FBDE0719@river-bank.demon.co.uk>

Skip Montanaro wrote:
> 
>     Phil> This is a design decision - earlier versions did do this but it
>     Phil> almost always results in circular reference counts.
> 
> With cyclic GC couldn't you just let those circular reference counts occur
> and rely on the GC machinery to break the cycles?  Or do you have __del__
> methods?

Remember I'm ignorant when it comes to cyclic GC - PyQt is older and I
didn't pay much attention to it when it was introduced, so I may be
missing a trick. One thing though, if you have a dialog display and have
a circular reference to it, then you del() the dialog instance - when
will the GC actually get around to resolving the circular reference and
removing the dialog from the screen? It must be guaranteed to do so
before the Qt event loop is re-entered.

Every PyQt class has a __del__ method (because I need to control the
order in which instance "variables" are deleted).

Phil



From guido at digicool.com  Sat Mar 10 21:08:25 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 10 Mar 2001 15:08:25 -0500
Subject: [Python-Dev] Looking for a (paid) reviewer of Python code
Message-ID: <200103102008.PAA05543@cj20424-a.reston1.va.home.com>

I received the mail below; apparently Mr. Amon's problem is that he
needs someone to review a Python program that he ordered written
before he pays the programmer.  Mr. Amon will pay for the review and
has given me permission to forward his message here.  Please write
him at <lramon at earthlink.net>.

--Guido van Rossum (home page: http://www.python.org/~guido/)

------- Forwarded Message

Date:    Wed, 07 Mar 2001 10:58:04 -0500
From:    "Larry Amon" <lramon at earthlink.net>
To:      <guido at python.org>
Subject: Python programs

Hi Guido,

    My name is Larry Amon and I am the President/CEO of SurveyGenie.com. We
have had a relationship with a programmer at Harvard who has been using
Python as his programming language of choice. He tells us that he has this
valuable program that he has developed in Python. Our problem is that we
don't know anyone who knows Python that would be able to verify his claim.
We have funded this guy with our own hard earned money and now he is holding
his program hostage. He is willing to make a deal, but we need to know if
his program is worth anything.

    Do you have any suggestions? You can reach me at lramon at earthlink.net or
you can call me at 941 593 8250.


Regards
Larry Amon
CEO SurveyGenie.com

------- End of Forwarded Message




From pedroni at inf.ethz.ch  Sun Mar 11 03:11:34 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Sun, 11 Mar 2001 03:11:34 +0100
Subject: [Python-Dev] nested scopes and global: some corner cases
Message-ID: <005c01c0a9d0$99ff21e0$ae5821c0@newmexico>

Hi.

While writing nested scopes support for Jython (it now passes test_scope
and test_future <wink>), I have come across these further corner cases for
nested scopes mixed with global declarations. I have tried them with
Python 2.1b1, and I wonder if the results are consistent with the proposed
rule: a free variable is bound according to the nearest outer scope
binding (assignment or global decl); class scopes (for backward
compatibility) are ignored for this purpose.

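[Editor's aside: a concrete illustration of that rule, using the behavior
modern Python settled on (so a sketch, not 2.1b1 output) — a free variable
in a method skips any class-scope binding and resolves in the nearest
enclosing function scope:]

```python
x = 'top'

def outer():
    x = 'outer'
    class C:
        x = 'class'            # class-scope binding: invisible to methods
        def method(self):
            return x           # free variable: binds to outer()'s x
    return C()

print(outer().method())        # 'outer'
```
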
(I)
from __future__ import nested_scopes

x='top'
def ta():
 global x
 def tata():
  exec "x=1" in locals()
  return x # LOAD_NAME
 return tata

print ta()() prints 1; I believed it should print 'top', and that a
LOAD_GLOBAL should have been produced. In this case the global binding is
somehow ignored. Note: either putting a global decl in tata or removing
the exec makes tata deliver 'top' as I expected (LOAD_GLOBALs are emitted).
Is this a bug, or am I missing something?

(II)
from __future__ import nested_scopes

x='top'
def ta():
    x='ta'
    class A:
        global x
        def tata(self):
            return x # LOAD_GLOBAL
    return A

print ta()().tata() # -> 'top'

Should the global decl in the class scope not be ignored, so that x is
bound to the x in ta, resulting in 'ta' as output? If one substitutes
x='A' for the global x, that's what happens.
Or should only local bindings in class scope be ignored, but not global
declarations?

regards, Samuele Pedroni




From tim.one at home.com  Sun Mar 11 06:16:38 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 11 Mar 2001 00:16:38 -0500
Subject: [Python-Dev] nested scopes and global: some corner cases
In-Reply-To: <005c01c0a9d0$99ff21e0$ae5821c0@newmexico>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>

[Samuele Pedroni]
> ...
> I have tried them with python 2.1b1 and I wonder if the results
> are consistent with the proposed rule:
> a free variable is bound according to the nearest outer scope binding
> (assign-like or global decl),
> class scopes (for backw-comp) are ignored wrt this.

"exec" and "import*" always complicate everything, though.

> (I)
> from __future__ import nested_scopes
>
> x='top'
> def ta():
>  global x
>  def tata():
>   exec "x=1" in locals()
>   return x # LOAD_NAME
>  return tata
>
> print ta()() prints 1, I believed it should print 'top' and a
> LOAD_GLOBAL should have been produced.

I doubt this will change.  In the presence of exec, the compiler has no idea
what's local anymore, so it deliberately generates LOAD_NAME.  When Guido
says he intends to "deprecate" exec-without-in, he should also always say
"and also deprecate exec in locals()/globals() too".  But he'll have to think
about that and get back to you <wink>.

Note that modifications to locals() already have undefined behavior
(according to the Ref Man), so exec-in-locals() is undefined too if the
exec'ed code tries to (re)bind any names.

> In this case the global binding is somehow ignored. Note: putting
> a global decl in tata xor removing the exec make tata deliver 'top' as
> I expected (LOAD_GLOBALs are emitted).
> Is this a bug or I'm missing something?

It's an accident either way (IMO), so it's a bug either way too -- or a
feature either way.  It's basically senseless!  What you're missing is the
layers of hackery in support of exec even before 2.1; this "give up on static
identification of locals entirely in the presence of exec" goes back many
years.
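
[Editor's aside: modern Python resolved this by fiat — exec inside a
function can never rebind the function's locals, so the ambiguity simply
cannot arise. A quick sketch of today's behavior:]

```python
def f():
    x = 'before'
    exec("x = 'after'")   # writes into a throwaway copy of the locals
    return x              # the function's own x is untouched

print(f())                # 'before'
```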

> (II)
> from __future__ import nested_scopes

> x='top'
> def ta():
>     x='ta'
>     class A:
>         global x
>         def tata(self):
>             return x # LOAD_GLOBAL
>     return A
>
> print ta()().tata() # -> 'top'
>
> should not the global decl in class scope be ignored and so x be
> bound to x in ta, resulting in 'ta' as output?

Yes, this one is clearly a bug.  Good catch!




From moshez at zadka.site.co.il  Sun Mar 11 16:19:44 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sun, 11 Mar 2001 17:19:44 +0200 (IST)
Subject: [Python-Dev] Numeric PEPs
Message-ID: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>

Trying once again for the sought-after position of "most PEPs on the
planet", here are 3 new PEPs as discussed at the DevDay. These PEPs are,
in large part, taking apart the existing PEP-0228, which served its
strawman (or pie-in-the-sky) purpose well.

Note that according to PEP 0001, the discussion now should be focused
on whether these should be official PEPs, not on whether they are to be
accepted. If we decide that these are good enough to be PEPs, Barry
should check them in and fix the internal references between them.
I would also appreciate a non-Yahoo list (either SF or python.org) being
set up to discuss these issues -- I'd rather the discussion happen there
than in my mailbox; I had bad experiences with that with PEP-0228.

(See, Barry? "send a draft" isn't that scary. Bet you don't like me
telling other people about it, huh?)

PEP: XXX
Title: Unifying Long Integers and Integers
Version: $Revision$
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Python-Version: 2.2
Type: Standards Track
Created: 11-Mar-2001
Post-History:


Abstract

    Python has both integers, machine-word-sized integral types, and long
    integers, unbounded integral types. When integer operations overflow
    the machine registers, they raise an error. This PEP proposes to do
    away with the distinction, and to unify the types from the perspective
    of both the Python interpreter and the C API.

Rationale

    Having the machine word size leak into the language hinders portability
    (for example, .pyc files are not portable because of it). Many programs
    find a need to deal with larger numbers after the fact, and changing
    the algorithms later is not only bothersome, but hinders performance
    in the normal case.

Literals

    A trailing 'L' at the end of an integer literal will stop having any
    meaning, and will be eventually phased out. This will be done using
    warnings when encountering such literals. The warning will be off by
    default in Python 2.2, on by default for two revisions, and then will
    no longer be supported.

Builtin Functions

    The function long will call the function int, issuing a warning. The
    warning will be off in 2.2, and on for two revisions before the
    function is removed. A FAQ entry will be added noting that old modules
    which need it can put

         long = int

    at the top, or put

         import __builtin__
         __builtin__.long = int

    in site.py.

C API

    All PyLong_AsX functions will call PyInt_AsX. If PyInt_AsX does not
    exist, it will be added. Similarly for PyLong_FromX. A path of warnings
    similar to that for the Python builtins will be followed.


Overflows

    When an arithmetic operation on two numbers whose internal
    representation is a machine-level integer returns something whose
    internal representation is a bignum, a warning which is turned off by
    default will be issued. This is only a debugging aid, and has no
    guaranteed semantics.
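
    [Editor's aside: modern Python (3.x) eventually adopted exactly this
    unification, with the promotion happening silently and no warning at
    all. A quick sketch of the resulting semantics:]

```python
a = 2 ** 62                # fits in a machine word on a 64-bit build
b = a * a                  # silently promoted to a bignum; no OverflowError
assert isinstance(b, int)  # still the one unified int type
assert b == 2 ** 124
print(b.bit_length())      # 125
```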

Implementation

    The PyInt type's slot for a C long will be turned into a 

           union {
               long i;
               digit digits[1];
           };

    Only the n-1 lower bits of the long have any meaning; the top bit is
    always set. This distinguishes the two arms of the union. All PyInt
    functions will check this bit before deciding which type of operation
    to use.

Jython Issues

    Jython will have a PyInt interface which is implemented by both
    PyFixNum and PyBigNum.


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

==========================================
PEP: XXX
Title: Non-integer Division
Version: $Revision$
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Python-Version: 2.2
Type: Standards Track
Created: 11-Mar-2001
Post-History:


Abstract

    Dividing integers currently returns the floor of the quotient. This
    behaviour is known as integer division, and is similar to what C and
    FORTRAN do. It has the useful property that all operations on integers
    return integers, but it does tend to put a hump in the learning curve
    when new programmers are surprised that

                  1/2 == 0

    This proposal shows a way to change this while keeping backward
    compatibility issues in mind.

Rationale

    The behaviour of integer division is a major stumbling block found in
    user testing of Python. It manages to trip up new programmers regularly
    and even causes the experienced programmer to make the occasional bug.
    The workarounds, like explicitly coercing one of the operands to float
    or using a non-integer literal, are very non-intuitive and lower the
    readability of the program.

// Operator

    A '//' operator will be introduced, which will call the nb_intdivide
    or __intdiv__ slots. This operator will be implemented in all the
    Python numeric types, and will have the semantics of

                 a // b == floor(a/b)

    Except that the type of a//b will be the type a and b will be coerced
    into (specifically, if a and b are of the same type, a//b will be of that
    type too).
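    The stated identity is easy to check with the floor function (this
    sketch uses the '//' spelling as proposed; later Python releases do
    behave this way):

```python
import math

# a // b is defined as floor(a / b); the result type follows the type
# the operands are coerced into.
for a, b in [(1, 2), (7, 2), (-7, 2), (7, -2)]:
    assert a // b == math.floor(a / b)

assert 7 // 2 == 3        # int // int -> int
assert -7 // 2 == -4      # floor, not truncation toward zero
assert 7.0 // 2 == 3.0    # mixed operands coerce to float
```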

Changing the Semantics of the / Operator

    The nb_divide slot on integers (and long integers, if these are a
    separate type) will issue a warning when given integers a and b such
    that

                  a % b != 0

    The warning will be off by default in the 2.2 release, on by default
    in the next Python release, and will stay in effect for 24 months.
    In the first Python release after those 24 months, '/' will implement

                  (a/b) * b == a (more or less)

    The type of a/b will be either a float or a rational, depending on other
    PEPs.

__future__

    A special opcode, FUTURE_DIV will be added that does the equivalent
    of

        if type(a) in (types.IntType, types.LongType):
             if type(b) in (types.IntType, types.LongType):
                 if a % b != 0:
                      return float(a)/b
        return a/b

    (or rational(a)/b, depending on whether 0.5 is rational or float)

    If "from __future__ import non_integer_division" is present, then in
    all releases until the IntType nb_divide is changed, the "/" operator
    will be compiled to FUTURE_DIV.
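    A runnable sketch of the FUTURE_DIV logic (assuming, for this
    illustration only, a single unified int type and float as the
    non-integer result):

```python
def future_div(a, b):
    # Equivalent of the proposed FUTURE_DIV opcode: integer operands
    # with a non-exact quotient yield a float; an exact quotient keeps
    # the classic integer result; other types divide as usual.
    if isinstance(a, int) and isinstance(b, int):
        if a % b != 0:
            return float(a) / b
        return a // b
    return a / b

assert future_div(1, 2) == 0.5   # no longer 0
assert future_div(4, 2) == 2     # exact division stays integral
```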

Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

====================================
PEP: XXX
Title: Adding a Rational Type to Python
Version: $Revision$
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Python-Version: 2.2
Type: Standards Track
Created: 11-Mar-2001
Post-History:


Abstract

    Python has no number type whose semantics are those of an unboundedly
    precise rational number. This proposal explains the semantics of such
    a type, and suggests builtin functions and literals to support such
    a type. In addition, if division of integers would return a non-integer,
    it could also return a rational type.

Rationale

    While sometimes slower and more memory intensive (in general,
    unboundedly so), rational arithmetic captures the mathematical ideal
    of numbers more closely, and tends to have behaviour which is less
    surprising to newbies.

RationalType

    This will be a numeric type. The unary operators will do the obvious thing.
    Binary operators will coerce integers and long integers to rationals, and
    rationals to floats and complexes.

    The following attributes will be supported: .numerator, .denominator.
    The language definition will guarantee nothing other than that

           r.denominator * r == r.numerator

    In particular, no guarantees are made regarding the GCD or the sign of
    the denominator, even though in the proposed implementation, the GCD is
    always 1 and the denominator is always positive.

    The method r.trim(max_denominator) will return the closest rational s to
    r such that abs(s.denominator) <= max_denominator.
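    The standard library's later fractions.Fraction type realizes
    essentially these semantics, with limit_denominator() playing the
    role of the proposed trim() (this correspondence is an observation
    for illustration, not part of the proposal):

```python
import math
from fractions import Fraction

r = Fraction(355, 113)
# The one guaranteed identity: r.denominator * r == r.numerator.
assert r.denominator * r == r.numerator

# trim(max_denominator): closest rational with a bounded denominator.
pi = Fraction(math.pi)
assert pi.limit_denominator(10) == Fraction(22, 7)
assert pi.limit_denominator(10).denominator <= 10
```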

The rational() Builtin

    This function will have the signature rational(n, d=1). n and d must both
    be integers, long integers or rationals. A guarantee is made that

            rational(n, d) * d == n
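    With fractions.Fraction standing in for the proposed rational()
    builtin (an assumption of this sketch), the guarantee reads:

```python
from fractions import Fraction as rational   # stand-in for the proposed builtin

n, d = 3, 7
assert rational(n, d) * d == n   # the guaranteed identity
assert rational(5) == 5          # d defaults to 1
```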

Literals

    Literals conforming to the RE '\d*\.\d*' will be rational numbers.

Backwards Compatibility

    The only backwards compatibility issue is the type of the literals
    mentioned above. The following migration is suggested:

    1. from __future__ import rational_literals will cause all such literals
       to be treated as rational numbers.
    2. Python 2.2 will have a warning, turned off by default, about such
       literals in the absence of such a __future__ statement. The warning
       message will contain information about the __future__ statement, and
       note that to get floating point literals, they should be suffixed
       with "e0".
    3. Python 2.3 will have the warning turned on by default. This warning will
       stay in place for 24 months, at which time the literals will be rationals
       and the warning will be removed.


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From pedroni at inf.ethz.ch  Sun Mar 11 17:17:38 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Sun, 11 Mar 2001 17:17:38 +0100
Subject: [Python-Dev] nested scopes and global: some corner cases
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>
Message-ID: <001b01c0aa46$d3dbbd80$f979fea9@newmexico>

Hi.

[Tim Peters on
from __future__ import nested_scopes

x='top'
def ta():
  global x
  def tata():
   exec "x=1" in locals()
   return x # LOAD_NAME vs LOAD_GLOBAL?
  return tata

 print ta()() # 1 vs. 'top' ?
]
-- snip --
> It's an accident either way (IMO), so it's a bug either way too -- or a
> feature either way.  It's basically senseless!  What you're missing is the
> layers of hackery in support of exec even before 2.1; this "give up on static
> identification of locals entirely in the presence of exec" goes back many
> years.
(Just a joke) I'm not such a "newbie" that the guess I'm missing something
is right with probability > .5. At least I hope so.
The same hackery is there in jython codebase
and I have taken much care in preserving it <wink>.

The point is simply that 'exec in locals()' is like a bare exec
but it has been decided to allow 'exec in' even in the presence
of nested scopes, and we cannot detect the 'locals()' special case
(at compile time) because in python 'locals' is the builtin only with
high probability.

So we face the problem: how do we *implement* an undefined behaviour
(the ref says that changing locals is undefined; everybody knows that),
one that historically has never been to seg fault, in the new (nested
scopes) context? It is also true that what we are doing is "impossible";
that's why it has been decided to raise a SyntaxError in the bare exec
case <wink>.

To be honest, I have just implemented things in jython my/some way, and then
discovered that the jython CVS version and Python 2.1b1 (here) behave
differently. A posteriori I just tried to solve/explain things using
the old problem pattern: I give you a (number) sequence, guess the next
term:

the sequence is: (over this jython and python agree)

from __future__ import nested_scopes

def a():
 exec "x=1" in locals()
 return x # LOAD_NAME (jython does the equivalent)

def b():
  global x
  exec "x=1" in locals()
  return x # LOAD_GLOBAL

def c():
 global x
 def cc(): return x # LOAD_GLOBAL
 return cc

def d():
 x='d'
 def dd():
   exec "x=1" in locals() # without 'in locals()' => SynError
   return x # LOAD_DEREF (x in d)
 return dd

and then the term to guess:

def z():
 global x
 def zz():
  exec "x=1" in locals() # without 'in locals()' => SynError
  return x # ???? python guesses LOAD_NAME, jython the equiv of LOAD_GLOBAL
 return zz

Should python and jython agree here too? Anybody wants to spend some time
convincing me that I should change jython meaning of undefined?
I will not spend more time to do the converse <wink>.

regards, Samuele Pedroni.

PS: It is also possible that in trying to solve the pdb+nested scopes
problem we will have to consider the grab-the-locals problem with more care.




From paulp at ActiveState.com  Sun Mar 11 20:15:11 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 11 Mar 2001 11:15:11 -0800
Subject: [Python-Dev] mail.python.org down?
Message-ID: <3AABCEBF.1FEC1F9D@ActiveState.com>

>>> urllib.urlopen("http://mail.python.org")
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "c:\python20\lib\urllib.py", line 61, in urlopen
    return _urlopener.open(url)
  File "c:\python20\lib\urllib.py", line 166, in open
    return getattr(self, name)(url)
  File "c:\python20\lib\urllib.py", line 273, in open_http
    h.putrequest('GET', selector)
  File "c:\python20\lib\httplib.py", line 425, in putrequest
    self.send(str)
  File "c:\python20\lib\httplib.py", line 367, in send
    self.connect()
  File "c:\python20\lib\httplib.py", line 351, in connect
    self.sock.connect((self.host, self.port))
  File "<string>", line 1, in connect
IOError: [Errno socket error] (10061, 'Connection refused')

-- 
Python:
    Programming the way
    Guido
    indented it.



From tim.one at home.com  Sun Mar 11 20:14:28 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 11 Mar 2001 14:14:28 -0500
Subject: [Python-Dev] Forbidden names & obmalloc.c
Message-ID: <LNBBLJKPBEHFEDALKOLCOEOCJEAA.tim.one@home.com>

In std C, all identifiers that begin with an underscore and are followed by
an underscore or uppercase letter are reserved for the platform C
implementation.  obmalloc.c violates this rule all over the place, spilling
over into objimpl.h's use of _PyCore_ObjectMalloc, _PyCore_ObjectRealloc, and
_PyCore_ObjectFree.  The leading "_Py" there *probably* leaves them safe
despite being forbidden, but things like obmalloc.c's _SYSTEM_MALLOC and
_SET_HOOKS are going to bite us sooner or later (hard to say, but they may
have already, in bug #407680).

I renamed a few of the offending vrbl names, but I don't understand the
intent of the multiple layers of macros in this subsystem.  If anyone else
believes they do, please rename these suckers before the bad names get out
into the world and we have to break user code to repair eventual conflicts
with platforms' uses of these (reserved!) names.




From guido at digicool.com  Sun Mar 11 22:37:14 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 16:37:14 -0500
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: Your message of "Sun, 11 Mar 2001 00:16:38 EST."
             <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com> 
Message-ID: <200103112137.QAA13084@cj20424-a.reston1.va.home.com>

> When Guido
> says he intends to "deprecate" exec-without-in, he should also always say
> "and also deprecate exec in locals()/global() too".  But he'll have to think
> about that and get back to you <wink>.

Actually, I intend to deprecate locals().  For now, globals() are
fine.  I also intend to deprecate vars(), at least in the form that is
equivalent to locals().

> Note that modifications to locals() already have undefined behavior
> (according to the Ref Man), so exec-in-locals() is undefined too if the
> exec'ed code tries to (re)bind any names.

And that's the basis for deprecating it.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Sun Mar 11 23:28:29 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 17:28:29 -0500
Subject: [Python-Dev] mail.python.org down?
In-Reply-To: Your message of "Sun, 11 Mar 2001 11:15:11 PST."
             <3AABCEBF.1FEC1F9D@ActiveState.com> 
References: <3AABCEBF.1FEC1F9D@ActiveState.com> 
Message-ID: <200103112228.RAA13919@cj20424-a.reston1.va.home.com>

> >>> urllib.urlopen("http://mail.python.org")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in ?
>   File "c:\python20\lib\urllib.py", line 61, in urlopen
>     return _urlopener.open(url)
>   File "c:\python20\lib\urllib.py", line 166, in open
>     return getattr(self, name)(url)
>   File "c:\python20\lib\urllib.py", line 273, in open_http
>     h.putrequest('GET', selector)
>   File "c:\python20\lib\httplib.py", line 425, in putrequest
>     self.send(str)
>   File "c:\python20\lib\httplib.py", line 367, in send
>     self.connect()
>   File "c:\python20\lib\httplib.py", line 351, in connect
>     self.sock.connect((self.host, self.port))
>   File "<string>", line 1, in connect
> IOError: [Errno socket error] (10061, 'Connection refused')

Beats me.  Indeed it is down.  I've notified the folks at DC
responsible for the site.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From paulp at ActiveState.com  Mon Mar 12 00:15:38 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 11 Mar 2001 15:15:38 -0800
Subject: [Python-Dev] mail.python.org down?
References: <3AABCEBF.1FEC1F9D@ActiveState.com> <200103112228.RAA13919@cj20424-a.reston1.va.home.com>
Message-ID: <3AAC071A.799A8B50@ActiveState.com>

Guido van Rossum wrote:
> 
>...
> 
> Beats me.  Indeed it is down.  I've notified the folks at DC
> responsible for the site.

It is fixed now. Thanks!

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From paulp at ActiveState.com  Mon Mar 12 00:23:07 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 11 Mar 2001 15:23:07 -0800
Subject: [Python-Dev] Revive the types sig?
Message-ID: <3AAC08DB.9D4E96B4@ActiveState.com>

I have been involved with the types-sig for a long time and it has
consumed countless hours out of the lives of many brilliant people. I
strongly believe that it will only ever work if we change some of our
fundamental assumptions, goals and procedures. At next year's
conference, I do not want to be at the same place in the discussion that
we were this year, and last year, and the year before. The last time I
thought I could make progress through sheer effort. All that did was
burn me out and stress out my wife. We've got to work smarter, not
harder.

The first thing we need to adjust is our terminology and goals. I think
that we should design a *parameter type annotation* system that will
lead directly to better error checking *at runtime*, better
documentation, better development environments and so forth. Checking
types *at compile time* should be considered a tools issue that can be
solved by separate tools. I'm not going to say that Python will NEVER
have a static type checking system but I would say that that shouldn't
be a primary goal.

I've reversed my opinion on this issue. Hey, even Guido makes mistakes.

I think that if the types-sig is going to come up with something
useful this time, we must observe a few principles that have proven
useful in developing Python:

1. Incremental development is okay. You do not need the end-goal in
mind before you begin work. Python today is very different than it was
when it was first developed (not as radically different than some
languages, but still different).

2. It is not necessary to get everything right. Python has some warts.
Some are easier to remove than others but they can all be removed
eventually. We have to get a type system done, test it out, and then
maybe we have to remove the warts. We may not design a perfect gem from
the start. Perfection is a goal, not a requirement.

3. Whatever feature you believe is absolutely necessary to a decent
type system probably is not. There are no right or wrong answers,
only features that work better or worse than other features.

It is important to understand that a dynamically-checked type
annotation system is just a replacement for assertions. Anything that
cannot be expressed in the type system CAN be expressed through
assertions.

For instance one person might claim that the type system needs to
differentiate between 32 bit integers and 64 bit integers. But if we
do not allow that differentiation directly in the type system, they
could do that in assertions. C'est la vie.

This is not unique to Python.  Languages like C++ and Java also have
type test and type assertion operators to "work around" the
limitations of their type systems. If people who have spent their
entire lives inventing static type checking systems cannot come up
with systems that are 100% "complete" then we in the Python world
should not even try. There is nothing wrong with using assertions for
advanced type checks. 

For instance, if you try to come up with a type system that can define
the type of "map" you will probably come up with something so
complicated that it will never be agreed upon or implemented.
(Python's map is much harder to type-declare than that of functional
languages because the function passed in must handle exactly as many
arguments as the unbounded number of sequences that are passed as
arguments to map.)

Even if we took an extreme position and ONLY allowed type annotations
for basic types like strings, numbers and sequences, Python would 
still be a better language. There are thousands of instances of these 
types in the standard library. If we can improve the error checking 
and documentation of these methods we have improved on the status 
quo. Adding type annotations for the other parameters could wait 
for another day.
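One concrete sketch of what such dynamically checked annotations could
look like (the decorator spelling and every name here are hypothetical
illustrations, not a proposal): a wrapper verifies basic parameter types
at call time and reports errors through the usual runtime mechanism.

```python
import functools

def checked(**param_types):
    # Hypothetical runtime checker for the basic-types-only idea above.
    def decorate(func):
        names = func.__code__.co_varnames[:func.__code__.co_argcount]
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            bound = dict(zip(names, args))
            bound.update(kwargs)
            for name, expected in param_types.items():
                if name in bound and not isinstance(bound[name], expected):
                    raise TypeError("%s must be %s, got %r" %
                                    (name, expected.__name__, bound[name]))
            return func(*args, **kwargs)
        return wrapper
    return decorate

@checked(path=str, count=int)
def repeat_name(path, count):
    return path * count

assert repeat_name("ab", 3) == "ababab"
try:
    repeat_name("ab", "3")          # wrong type: caught at runtime
except TypeError:
    pass
else:
    raise AssertionError("expected a TypeError")
```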

----

In particular there are three features that have always exploded into
unending debates in the past. I claim that they should temporarily be
set aside while we work out the basics.

 A) Parameterized types (or templates): 

Parameterized types always cause the discussion to spin out of control
as we discuss levels and types of
parameterizability. A type system can be very useful without
parameterization. For instance, Python itself is written in C. C has no
parameterizability. Yet C is obviously still very useful (and simple!).
Java also does not yet have parameterized types and yet it is the most
rapidly growing statically typed programming language!

It is also important to note that parameterized types are much, much
more important in a language that "claims" to catch most or all type
errors at compile time. Python will probably never make that claim.
If you want to do a more sophisticated type check than Python allows,
you should do that in an assertion:

assert Btree.containsType(String)

Once the basic type system is in place, we can discuss the importance
of parameterized types separately later. Once we have attempted to use
Python without them, we will understand our needs better. The type
system should not prohibit the addition of parameterized types in the
future. 

A person could make a strong argument for allowing parameterization
only of basic types ("list of string", "tuple of integers") but I
think that we could even postpone this for the future.

 B) Static type checking: 

Static type warnings are important and we want to enable the development
of tools that will detect type errors before applications are shipped.
Nevertheless, we should not attempt to define a static type checking
system for Python at this point. That may happen in the future or never.

Unlike Java or C++, we should not require the Python interpreter
itself to ever reject code that "might be" type incorrect. Other tools
such as linters and IDEs should handle these forms of whole-program
type-checks.  Rather than defining the behavior of these tools in
advance, we should leave that as a quality of implementation issue for
now.

We might decide to add formally-defined static type checking to
Python in the future. Dynamically checked annotations give us a
starting point. Once again, I think that the type system should be
defined so that annotations could be used as part of a static type
checking system in the future, should we decide that we want one.

 C) Attribute-value and variable declarations: 

In traditional static type checking systems, it is very important to
declare the type for attributes in a class and variables in a function. 

This feature is useful but it is fairly separable. I believe it should
wait because it brings up a bunch of issues such as read-only
attributes, cross-boundary assignment checks and so forth.

I propose that the first go-round of the types-sig should ONLY address
the issue of function signatures.

Let's discuss my proposal in the types-sig. Executive summary:

 * incremental development policy
 * syntax for parameter type declarations
 * syntax for return type declarations
 * optional runtime type checking
 * goals are better runtime error reporting and method documentation

Deferred for future versions (or never):

 * compile-time type checking
 * parameterized types
 * declarations for variables and attributes

http://www.python.org/sigs/types-sig/

-- 
Python:
    Programming the way
    Guido
    indented it.



From guido at digicool.com  Mon Mar 12 00:25:13 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 18:25:13 -0500
Subject: [Python-Dev] Unifying Long Integers and Integers
In-Reply-To: Your message of "Sun, 11 Mar 2001 17:19:44 +0200."
             <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
Message-ID: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>

(I'm splitting this in separate replies per PEP, to focus the
discussion a bit.)

> Trying once again for the sought after position of "most PEPs on the
> planet", here are 3 new PEPs as discussed on the DevDay. These PEPs
> are in a large way, taking apart the existing PEP-0228, which served
> its strawman (or pie-in-the-sky) purpose well.
> 
> Note that according to PEP 0001, the discussion now should be focused
> on whether these should be official PEPs, not whether these are to
> be accepted. If we decide that these PEPs are good enough to be PEPs
> Barry should check them in, fix the internal references between them.

Actually, since you have SF checkin permissions, Barry can just give
you a PEP number and you can check it in yourself!

> I would also appreciate setting a non-Yahoo list (either SF or
> python.org) to discuss those issues -- I'd rather the discussion be
> there than in my mailbox -- I had bad experience regarding
> that with PEP-0228.

Please help yourself.  I recommend using SF since it requires less
overhead for the poor python.org sysadmins.

> (See Barry? "send a draft" isn't that scary. Bet you don't like me
> to tell other people about it, huh?)

What was that about?

> PEP: XXX
> Title: Unifying Long Integers and Integers
> Version: $Revision$
> Author: pep at zadka.site.co.il (Moshe Zadka)
> Status: Draft
> Python-Version: 2.2
> Type: Standards Track
> Created: 11-Mar-2001
> Post-History:
> 
> 
> Abstract
> 
>     Python has both integers, machine word size integral types, and
>     long integers, unbounded integral types. When integer
>     operations overflow the machine registers, they raise an
>     error. This proposes to do away with the distinction, and unify
>     the types from the perspective of both the Python interpreter,
>     and the C API.
> 
> Rationale
> 
>     Having the machine word size leak to the language hinders
>     portability (for examples, .pyc's are not portable because of
>     that). Many programs find a need to deal with larger numbers
>     after the fact, and changing the algorithms later is not only
>     bothersome, but hinders performance on the normal case.

I'm not sure if the portability of .pyc's is much worse than that of
.py files.  As long as you don't use plain ints >= 2**31 both are 100%
portable.  *programs* can of course become non-portable, but the true
reason for the change is simply that the distinction is arbitrary and
irrelevant.

> Literals
> 
>     A trailing 'L' at the end of an integer literal will stop having
>     any meaning, and will be eventually phased out. This will be
>     done using warnings when encountering such literals. The warning
>     will be off by default in Python 2.2, on by default for two
>     revisions, and then will no longer be supported.

Please suggest a more explicit schedule for introduction, with
approximate dates.  You can assume there will be roughly one 2.x
release every 6 months.

> Builtin Functions
> 
>     The function long will call the function int, issuing a
>     warning. The warning will be off in 2.2, and on for two
>     revisions before removing the function. A FAQ will be added that
>     if there are old modules needing this then
> 
>          long=int
> 
>     At the top would solve this, or
> 
>          import __builtin__
>          __builtin__.long=int
> 
>     In site.py.

There's more to it than that.  What about sys.maxint?  What should it
be set to?  We've got to pick *some* value because there's old code
that uses it.  (An additional problem here is that it's not easy to
issue warnings for using a particular constant.)

Other areas where we need to decide what to do: there are a few
operations that treat plain ints as unsigned: hex() and oct(), and the
format operators "%u", "%o" and "%x".  These have different semantics
for bignums!  (There they ignore the request for unsignedness and
return a signed representation anyway.)
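The bignum behaviour described here is easy to demonstrate (on an
interpreter with unified, bignum-style ints; the old machine-int
behaviour, e.g. hex(-1) == '0xffffffff', cannot be shown on such an
interpreter):

```python
# Bignum semantics: hex(), oct() and %x ignore the request for
# unsignedness and return a signed representation anyway.
assert hex(-1) == '-0x1'
assert oct(-8) == '-0o10'
assert '%x' % -255 == '-ff'
```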

There may be more -- the PEP should strive to eventually list all
issues, although of course it needn't be complete at first checkin.

> C API
> 
>     All PyLong_AsX will call PyInt_AsX. If PyInt_AsX does not exist,
>     it will be added. Similarly PyLong_FromX. A similar path of
>     warnings as for the Python builtins followed.

Many C APIs for other datatypes currently take int or long arguments,
e.g. list indexing and slicing.  I suppose these could stay the same,
or should we provide ways to use longer integers from C as well?

Also, what will you do about PyInt_AS_LONG()?  If PyInt_Check()
returns true for bignums, C code that uses PyInt_Check() and then
assumes that PyInt_AS_LONG() will return a valid outcome is in for a
big surprise!  I'm afraid that we will need to think through the
compatibility strategy for C code more.

> Overflows
> 
>     When an arithmetic operation on two numbers whose internal
>     representation is as a machine-level integers returns something
>     whose internal representation is a bignum, a warning which is
>     turned off by default will be issued. This is only a debugging
>     aid, and has no guaranteed semantics.

Note that the implementation suggested below implies that the overflow
boundary is at a different value than currently -- you take one bit
away from the long.  For backwards compatibility I think that may be
bad...

> Implementation
> 
>     The PyInt type's slot for a C long will be turned into a 
> 
>            union {
>                long i;
>                digit digits[1];
>            };

Almost.  The current bignum implementation actually has a length field
first.

I have an alternative implementation in mind where the type field is
actually different for machine ints and bignums.  Then the existing
int representation can stay, and we lose no bits.  This may have other
implications though, since uses of type(x) == type(1) will be broken.
Once the type/class unification is complete, this could be solved by
making long a subtype of int.

>     Only the n-1 lower bits of the long have any meaning, the top
>     bit is always set. This distinguishes the union. All PyInt
>     functions will check this bit before deciding which types of
>     operations to use.

See above. :-(

> Jython Issues
> 
>     Jython will have a PyInt interface which is implemented by both
>     PyFixNum and PyBigNum.
> 
> 
> Copyright
> 
>     This document has been placed in the public domain.
> 
> 
> 
> Local Variables:
> mode: indented-text
> indent-tabs-mode: nil
> End:

All in all, a good start, but needs some work, Moshe!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 12 00:37:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 18:37:37 -0500
Subject: [Python-Dev] Non-integer Division
In-Reply-To: Your message of "Sun, 11 Mar 2001 17:19:44 +0200."
             <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
Message-ID: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>

Good start, Moshe!  Some comments below.

> PEP: XXX
> Title: Non-integer Division
> Version: $Revision$
> Author: pep at zadka.site.co.il (Moshe Zadka)
> Status: Draft
> Python-Version: 2.2
> Type: Standards Track
> Created: 11-Mar-2001
> Post-History:
> 
> 
> Abstract
> 
>     Dividing integers returns the floor of the quantities. This
>     behaviour is known as integer division, and is similar to what C
>     and FORTRAN do.  This has the useful property that all
>     operations on integers return integers, but it does tend to put
>     a hump in the learning curve when new programmers are surprised
>     that
> 
>                   1/2 == 0
> 
>     This proposal shows a way to change this while keeping backward
>     compatibility issues in mind.
> 
> Rationale
> 
>     The behaviour of integer division is a major stumbling block
>     found in user testing of Python. This manages to trip up new
>     programmers regularly and even causes the experienced
>     programmer to make the occasional bug. The workarounds, like
>     explicitly coercing one of the operands to float or using a
>     non-integer literal, are very non-intuitive and lower the
>     readability of the program.

There is a specific kind of example that shows why this is bad.
Python's polymorphism and treatment of mixed-mode arithmetic
(e.g. int+float => float) suggests that functions taking float
arguments and doing some math on them should also be callable with int
arguments.  But sometimes that doesn't work.  For example, in
electronics, Ohm's law suggests that current (I) equals voltage (U)
divided by resistance (R).  So here's a function to calculate the
current:

    >>> def I(U, R):
    ...     return U/R
    ...
    >>> print I(110, 100) # Current through a 100 Ohm resistor at 110 Volt
    1
    >>> 

This answer is wrong! It should be 1.1.  While there's a work-around
(return 1.0*U/R), it's ugly, and moreover because no exception is
raised, simple code testing may not reveal the bug.  I've seen this
reported many times.

> // Operator

Note: we could wind up using a different way to spell this operator,
e.g. Pascal uses 'div'.  The disadvantage of 'div' is that it
introduces a new reserved word, with all the issues it creates.  The
disadvantage of '//' is that it means something very different to Java
and C++ users.

>     A '//' operator will be introduced, which will call the
>     nb_intdivide or __intdiv__ slots. This operator will be
>     implemented in all the Python numeric types, and will have the
>     semantics of
> 
>                  a // b == floor(a/b)
> 
>     Except that the type of a//b will be the type a and b will be
>     coerced into (specifically, if a and b are of the same type,
>     a//b will be of that type too).
> 
> Changing the Semantics of the / Operator
> 
>     The nb_divide slot on integers (and long integers, if these are
>     a separate type) will issue a warning when given integers a and
>     b such that
> 
>                   a % b != 0
> 
>     The warning will be off by default in the 2.2 release, and on by
>     default in the next Python release, and will stay in effect
>     for 24 months.  The next Python release after 24 months, it will
>     implement
> 
>                   (a/b) * b = a (more or less)
> 
>     The type of a/b will be either a float or a rational, depending
>     on other PEPs.
> 
> __future__
> 
>     A special opcode, FUTURE_DIV will be added that does the equivalent

Maybe for compatibility of bytecode files we should come up with a
better name, e.g. FLOAT_DIV?

>     of
> 
>         if type(a) in (types.IntType, types.LongType):
>              if type(b) in (types.IntType, types.LongType):
>                  if a % b != 0:
>                       return float(a)/b
>         return a/b
> 
>     (or rational(a)/b, depending on whether 0.5 is rational or float)
> 
>     If "from __future__ import non_integer_division" is present in the
>     releases until the IntType nb_divide is changed, the "/" operator is
>     compiled to FUTURE_DIV

I find "non_integer_division" rather long.  Maybe it should be called
"float_division"?

> Copyright
> 
>     This document has been placed in the public domain.
> 
> 
> 
> Local Variables:
> mode: indented-text
> indent-tabs-mode: nil
> End:

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 12 00:55:03 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 11 Mar 2001 18:55:03 -0500
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Your message of "Sun, 11 Mar 2001 17:19:44 +0200."
             <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> 
Message-ID: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>

Here's the third installment -- my response to Moshe's rational
numbers PEP.

I believe that a fourth PEP should be written as well: decimal
floating point.  Maybe Tim can draft this?

> PEP: XXX
> Title: Adding a Rational Type to Python
> Version: $Revision$
> Author: pep at zadka.site.co.il (Moshe Zadka)
> Status: Draft
> Python-Version: 2.2
> Type: Standards Track
> Created: 11-Mar-2001
> Post-History:
> 
> 
> Abstract
> 
>     Python has no number type whose semantics are that of an
>     unboundedly precise rational number.

But one could easily be added to the standard library, and several
implementations exist, including one in the standard distribution:
Demo/classes/Rat.py.

>     This proposal explains the
>     semantics of such a type, and suggests builtin functions and
>     literals to support such a type. In addition, if division of
>     integers would return a non-integer, it could also return a
>     rational type.

It's kind of sneaky not to mention in the abstract that this should be
the default representation for numbers containing a decimal point,
replacing most use of floats!

> Rationale
> 
>     While sometimes slower and more memory intensive (in general,
>     unboundedly so) rational arithmetic captures more closely the
>     mathematical ideal of numbers, and tends to have behaviour which
>     is less surprising to newbies,

This PEP definitely needs a section of arguments Pro and Con.  For
Con, mention at least that rational arithmetic is much slower than
floating point, and can become *very* much slower when algorithms
aren't coded carefully.  Now, naively coded algorithms often don't
work well with floats either, but there is a lot of cultural knowledge
about defensive programming with floats, which is easily accessible to
newbies -- similar information about coding with rationals is much
less easily accessible, because no mainstream languages have used
rationals before.  (I suppose Common Lisp has rationals, since it has
everything, but I doubt that it uses them by default for numbers with
a decimal point.)

> RationalType
> 
>     This will be a numeric type. The unary operators will do the
>     obvious thing.  Binary operators will coerce integers and long
>     integers to rationals, and rationals to floats and complexes.
>
>     The following attributes will be supported: .numerator,
>     .denominator.  The language definition will not define other
>     than that
> 
>            r.denominator * r == r.numerator
> 
>     In particular, no guarantees are made regarding the GCD or the
>     sign of the denominator, even though in the proposed
>     implementation, the GCD is always 1 and the denominator is
>     always positive.
>
>     The method r.trim(max_denominator) will return the closest
>     rational s to r such that abs(s.denominator) <= max_denominator.
> 
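The proposed trim operation later materialized in the standard library as Fraction.limit_denominator; a quick illustration of what it buys for keeping intermediate results small:

```python
from fractions import Fraction

# r.trim(max_denominator) as proposed corresponds to what the later
# fractions module calls limit_denominator: the closest rational
# with a bounded denominator.
r = Fraction(3141592653589793, 10**15)          # pi to 15 places
assert r.limit_denominator(1000) == Fraction(355, 113)
```
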
> The rational() Builtin
> 
>     This function will have the signature rational(n, d=1). n and d
>     must both be integers, long integers or rationals. A guarantee
>     is made that
> 
>             rational(n, d) * d == n
> 
> Literals
> 
>     Literals conforming to the RE '\d*\.\d*' will be rational numbers.
> 
> Backwards Compatibility
> 
>     The only backwards compatibility issue is the type of literals
>     mentioned above. The following migration is suggested:
> 
>     1. from __future__ import rational_literals will cause all such
>        literals to be treated as rational numbers.
>     2. Python 2.2 will have a warning, turned off by default, about
>        such literals in the absence of such a __future__ statement. The
>        warning message will contain information about the __future__
>        statement, and that to get floating point literals, they
>        should be suffixed with "e0".
>     3. Python 2.3 will have the warning turned on by default. This
>        warning will stay in place for 24 months, at which time the
>        literals will be rationals and the warning will be removed.

There are also backwards compatibility issues at the C level.

Question: the time module's time() function currently returns a
float.  Should it return a rational instead?  This is a trick question.

> Copyright
> 
>     This document has been placed in the public domain.
> 
> 
> 
> Local Variables:
> mode: indented-text
> indent-tabs-mode: nil
> End:

--Guido van Rossum (home page: http://www.python.org/~guido/)



From moshez at zadka.site.co.il  Mon Mar 12 01:25:23 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 02:25:23 +0200 (IST)
Subject: [Python-Dev] Re: Unifying Long Integers and Integers
In-Reply-To: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>
References: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <20010312002523.94F68AA3A@darjeeling.zadka.site.co.il>

On Sun, 11 Mar 2001, Guido van Rossum <guido at digicool.com> wrote:

> Actually, since you have SF checkin permissions, Barry can just give
> you a PEP number and you can check it in yourself!

Technically yes. I'd rather Barry changed PEP-0000 himself ---
if he's ready to do that and let me check in the PEPs, that's fine, but
I just imagined he'd like to keep the state consistent.

[re: numerical PEPs mailing list] 
> Please help yourself.  I recommend using SF since it requires less
> overhead for the poor python.org sysadmins.

Err...I can't. Requesting an SF mailing list is an admin operation.

[re: portability of literals]
> I'm not sure if the portability of .pyc's is much worse than that of
> .py files.

Of course, .py's and .pyc's are just as portable. I do think that this
helps programs be more portable when they have literals inside them,
especially since (I believe) the world will soon be a mixture of
32-bit and 64-bit machines.

> There's more to it than that.  What about sys.maxint?  What should it
> be set to?

I think I'd like to stuff this one into "open issues" and ask people to
grep through code searching for sys.maxint before I decide.

Grepping through the standard library shows that this is most often
used as a maximum size for sequences. So, I think it should probably be
the maximum size of an integer type large enough to hold a pointer.
(The only exception is mhlib.py, which uses it when int(string) gives an
OverflowError -- which it would stop doing, so that code would be unreachable.)
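This is in fact roughly how things later turned out: Python 3's sys.maxsize is defined as the largest value a pointer-sized signed integer can hold, serving as the practical cap on sequence sizes while the int type itself is unbounded. A quick check:

```python
import struct
import sys

# sys.maxsize == 2**(pointer bits - 1) - 1, i.e. the largest value of
# a signed, pointer-sized C integer (Py_ssize_t).
assert sys.maxsize == 2 ** (8 * struct.calcsize("P") - 1) - 1
print(sys.maxsize)
```
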

> Other areas where we need to decide what to do: there are a few
> operations that treat plain ints as unsigned: hex() and oct(), and the
> format operators "%u", "%o" and "%x".  These have different semantics
> for bignums!  (There they ignore the request for unsignedness and
> return a signed representation anyway.)

This would probably be solved by the fact that after the change 1<<31
will be positive. The real problem is that << stops having 32 bit semantics --
but it never really had those anyway, it had machine-long-size semantics,
which were unportable, so we can just ask people with unportable code
to fix it.

What do you think? Should I issue a warning on shifts whose result
would have been truncated or sign-wrapped under the old semantics?
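For reference, this is how the unified semantics ended up behaving in Python 3: no wrap-around at the machine word size, and hex()/oct() give signed representations for every magnitude.

```python
# With unified ints there is no 32-bit wrap: shifting just grows the
# number, and hex()/oct() are signed for all values.
assert 1 << 31 == 2147483648     # positive, not -2**31
assert hex(-1) == '-0x1'         # signed, unlike old "%x" on machine ints
assert oct(8) == '0o10'
```
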

> Many C APIs for other datatypes currently take int or long arguments,
> e.g. list indexing and slicing.  I suppose these could stay the same,
> or should we provide ways to use longer integers from C as well?

Hmmmm....I'd probably add PyInt_AS_LONG_LONG under an #ifdef HAVE_LONG_LONG

> Also, what will you do about PyInt_AS_LONG()?  If PyInt_Check()
> returns true for bignums, C code that uses PyInt_Check() and then
> assumes that PyInt_AS_LONG() will return a valid outcome is in for a
> big surprise!

Yes, that's a problem. I have no immediate solution to that -- I'll
add it to the list of open issues.

> Note that the implementation suggested below implies that the overflow
> boundary is at a different value than currently -- you take one bit
> away from the long.  For backwards compatibility I think that may be
> bad...

It also means overflow raises a different exception. Again, I suspect
it will be used only in cases where the algorithm is supposed to maintain
that internal results are not bigger than the inputs, or things like that,
and there only as a debugging aid -- so I don't think this would be that
bad. And if people want to avoid using the longs for performance reasons,
then the implementation should definitely *not* lie to them.

> Almost.  The current bignum implementation actually has a length field
> first.

My bad. ;-)

> I have an alternative implementation in mind where the type field is
> actually different for machine ints and bignums.  Then the existing
> int representation can stay, and we lose no bits.  This may have other
> implications though, since uses of type(x) == type(1) will be broken.
> Once the type/class unification is complete, this could be solved by
> making long a subtype of int.

OK, so what's the concrete advice? How about if I just said "integer operations
that previously raised OverflowError now return long integers, and literals
in programs that are too big to be integers are long integers?". I started
leaning this way when I started writing the PEP and decided that true 
unification may not be the low hanging fruit we always assumed it would be.

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Mon Mar 12 01:36:58 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 02:36:58 +0200 (IST)
Subject: [Python-Dev] Re: Non-integer Division
In-Reply-To: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>
References: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <20010312003658.01096AA27@darjeeling.zadka.site.co.il>

On Sun, 11 Mar 2001, Guido van Rossum <guido at digicool.com> wrote:

> > // Operator
> 
> Note: we could wind up using a different way to spell this operator,
> e.g. Pascal uses 'div'.  The disadvantage of 'div' is that it
> introduces a new reserved word, with all the issues it creates.  The
> disadvantage of '//' is that it means something very different to Java
> and C++ users.

I have zero (0) intuition about what is better. You choose --- I have
no opinions on this. If we do go the "div" route, I need to also think
up a syntactic migration path once I figure out the parsing issues
involved. This isn't an argument -- just something you might want to 
consider before pronouncing on "div".

> Maybe for compatibility of bytecode files we should come up with a
> better name, e.g. FLOAT_DIV?

Hmmmm.....bytecode files have so far failed to be compatible across
any revision. I have no problem with that; I just feel that if
we're serious about compatibility, we should say so, and if we're not,
then half-assed measures will not help.

[re: from __future__ import non_integer_division] 
> I find "non_integer_division" rather long.  Maybe it should be called
> "float_division"?

I have no problems with that -- except that if the rational PEP is accepted,
then this would be rational_integer_division, and I didn't want to commit
myself yet.

You haven't commented yet about the rational PEP, so I don't know if that's
even an option.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Mon Mar 12 02:00:25 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 03:00:25 +0200 (IST)
Subject: [Python-Dev] Re: Adding a Rational Type to Python
In-Reply-To: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>
References: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>
Message-ID: <20010312010025.C93A7AA27@darjeeling.zadka.site.co.il>

On Sun, 11 Mar 2001, Guido van Rossum <guido at digicool.com> wrote:

> I believe that a fourth PEP should be written as well: decimal
> floating point.  Maybe Tim can draft this?

Better. I have very little decimal floating point experience, and in any
case I'd find it hard to write a PEP I don't believe in. However, I would
rather that it be written if only to be officially rejected, so if
no one volunteers to write it, I'm willing to do it anyway.
(Besides, I might manage to actually overtake Jeremy in number of PEPs
if I do this)

> It's kind of sneaky not to mention in the abstract that this should be
> the default representation for numbers containing a decimal point,
> replacing most use of floats!

I beg the mercy of the court. This was here, but got lost in the editing.
I've put it back.

> This PEP definitely needs a section of arguments Pro and Con.  For
> Con, mention at least that rational arithmetic is much slower than
> floating point, and can become *very* much slower when algorithms
> aren't coded carefully.

Note that I did try to help with coding carefully by adding the ".trim"
method.

> There are also backwards compatibility issues at the C level.

Hmmmmm....what are those? Very few C functions explicitly expect a
float, and the responsibility here can be pushed off to the Python
programmer by requiring explicit floats. For the others, PyArg_ParseTuple
can just coerce to float with the "d" type.

> Question: the time module's time() function currently returns a
> float.  Should it return a rational instead?  This is a trick question.

It should return the most exact number the underlying operating system
supports. For example, in OSes supporting gettimeofday, return a rational
built from tv_sec and tv_usec.
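A sketch of what that could look like; time.time_ns did not exist in 2001, so treat it here as a stand-in for gettimeofday's tv_sec/tv_usec pair:

```python
from fractions import Fraction
import time

def rational_time():
    # Exact timestamp as a rational number of seconds; the integer
    # nanosecond clock plays the role of tv_sec and tv_usec.
    return Fraction(time.time_ns(), 10**9)

t = rational_time()
assert t.denominator <= 10**9    # exact, no binary-float rounding
```
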
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From jeremy at alum.mit.edu  Mon Mar 12 02:22:04 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Sun, 11 Mar 2001 20:22:04 -0500 (EST)
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <3AAC08DB.9D4E96B4@ActiveState.com>
References: <3AAC08DB.9D4E96B4@ActiveState.com>
Message-ID: <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "PP" == Paul Prescod <paulp at ActiveState.com> writes:

  PP> Let's discuss my proposal in the types-sig. Executive summary:

  PP> * incremental development policy
  PP> * syntax for parameter type declarations
  PP> * syntax for return type declarations
  PP> * optional runtime type checking
  PP> * goals are better runtime error reporting and method
  PP>    documentation

If your goal is really the last one, then I don't see why we need the
first four <0.9 wink>.  Let's take this to the doc-sig.

I have never felt that Python's runtime error reporting is all that
bad.  Can you provide some more motivation for this concern?  Do you
have any examples of obscure errors that will be made clearer via type
declarations?

The best example I can think of for bad runtime error reporting is a
function that expects a sequence (perhaps of strings) and is passed a
string.  Since a string is a sequence, the argument is treated as a
sequence of length-1 strings.  I'm not sure how type declarations
help, because:

    (1) You would usually want to say that the function accepts a
        sequence -- and that doesn't get you any farther.

    (2) You would often want to say that the type of the elements of
        the sequence doesn't matter -- like len -- or that the type of
        the elements matters but the function is polymorphic -- like
        min.  In either case, you seem to be ruling out types for
        these very common sorts of functions.

If documentation is really the problem you want to solve, I imagine
we'd make much more progress if we could agree on a javadoc-style
format for documentation.  The ability to add return-type declarations
to functions and methods doesn't seem like much of a win.

Jeremy



From pedroni at inf.ethz.ch  Mon Mar 12 02:34:52 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Mon, 12 Mar 2001 02:34:52 +0100
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>  <200103112137.QAA13084@cj20424-a.reston1.va.home.com>
Message-ID: <003f01c0aa94$a3be18c0$325821c0@newmexico>

Hi.

[GvR]
> Actually, I intend to deprecate locals().  For now, globals() are
> fine.  I also intend to deprecate vars(), at least in the form that is
> equivalent to locals().
>
That's fine with me. Will that deprecation already be active in 2.1, e.g.
by having locals() and param-less vars() raise a warning?
I imagine a (new) function that produces a snapshot of the values in the
local, free and cell vars of a scope could do the job required for simple
debugging (the copy would not allow modifying the values back), or another
approach...

regards, Samuele Pedroni




From pedroni at inf.ethz.ch  Mon Mar 12 02:39:51 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Mon, 12 Mar 2001 02:39:51 +0100
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com>  <200103112137.QAA13084@cj20424-a.reston1.va.home.com>
Message-ID: <001c01c0aa95$55836f60$325821c0@newmexico>

Hi.

[GvR]
> Actually, I intend to deprecate locals().  For now, globals() are
> fine.  I also intend to deprecate vars(), at least in the form that is
> equivalent to locals().
>
That's fine with me. Will that deprecation already be active in 2.1, e.g.
by having locals() and param-less vars() raise a warning?
I imagine a (new) function that produces a snapshot of the values in the
local, free and cell vars of a scope could do the job required for simple
debugging (the copy would not allow modifying the values back),
or another approach...

In the meantime (if there's a meantime), is it OK for jython to behave
the way I have explained, or not?
(wrt exec + locals() + global + nested scopes)

regards, Samuele Pedroni




From michel at digicool.com  Mon Mar 12 03:05:48 2001
From: michel at digicool.com (Michel Pelletier)
Date: Sun, 11 Mar 2001 18:05:48 -0800 (PST)
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <3AAC08DB.9D4E96B4@ActiveState.com>
Message-ID: <Pine.LNX.4.32.0103111745440.887-100000@localhost.localdomain>

On Sun, 11 Mar 2001, Paul Prescod wrote:

> Let's discuss my proposal in the types-sig. Executive summary:
>
>  * incremental development policy
>  * syntax for parameter type declarations
>  * syntax for return type declarations
>  * optional runtime type checking
>  * goals are better runtime error reporting and method documentation

I could be way over my head here, but I'll try to give you my ideas.

I've read the past proposals for type declarations and their
syntax, and I've also read a good bit of the types-sig archive.

I feel that there is not as much benefit to extending type declarations
into the language as there is to interfaces.  I feel this way because I'm
not sure what benefit this has over an object that describes the types you
are expecting and is associated with your object (like an interface).

The upshot of having an interface describe your expected parameter and
return types is that the type checking can be made as compile/run-time,
optional/mandatory as you want without changing the language or your
implementation at all.  "Strong" checking could be done during testing,
and no checking at all during production, or any level in between.

A disadvantage of an interface is that it is a separate, additional step
over just writing code (as are any type assertions in the language, but
those are "easier" inline with the implementation).  But this
disadvantage is also an upshot when you imagine that the interface could
be developed later and bolted onto the implementation without
changing the implementation.

Also, type checking in general is good, but what about preconditions (this
parameter must be an int > 5 and < 10), postconditions, and other conditions
one handles now with assertions?  Would these be more language extensions in
your proposal?

As I see it, interfaces satisfy your first point, remove the need for your
second and third points, satisfy your fourth point, and meet the goals of
your fifth.

Nice to meet you at the conference,

-Michel





From greg at cosc.canterbury.ac.nz  Mon Mar 12 04:10:19 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 12 Mar 2001 16:10:19 +1300 (NZDT)
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: <003f01c0aa94$a3be18c0$325821c0@newmexico>
Message-ID: <200103120310.QAA04837@s454.cosc.canterbury.ac.nz>

Samuele Pedroni <pedroni at inf.ethz.ch>:

> I imagine a (new) function that produces a snapshot of the values in
> the local, free and cell vars of a scope could do the job required for
> simple debugging (the copy would not allow modifying the values back)

Modifying the values doesn't cause any problem, only
adding new names to the scope. So locals() or whatever
replaces it could return a mapping object that doesn't 
allow adding any keys.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim.one at home.com  Mon Mar 12 04:25:56 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 11 Mar 2001 22:25:56 -0500
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: <200103112137.QAA13084@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEPGJEAA.tim.one@home.com>

[Guido]
> Actually, I intend to deprecate locals().  For now, globals() are
> fine.  I also intend to deprecate vars(), at least in the form that is
> equivalent to locals().

OK by me.  Note that we agreed long ago that if nested scopes ever made it
in, we would need to supply a way to get a "namespace mapping" object so that
stuff like:

    print "The value of i is %(i)s and j %(j)s" % locals()

could be replaced by:

    print "The value of i is %(i)s and j %(j)s" % namespace_map_object()

Also agreed this need not be a dict; fine by me if it's immutable too.
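A rough sketch of what such a namespace mapping object might look like, built on frame attributes; the names here (NamespaceMap, namespace_map_object) are invented for illustration:

```python
import sys
from collections.abc import Mapping

class NamespaceMap(Mapping):
    """Read-only view that resolves names the way the compiler would:
    locals first, then globals, then builtins."""
    def __init__(self, frame):
        self._maps = (frame.f_locals, frame.f_globals, frame.f_builtins)

    def __getitem__(self, name):
        for m in self._maps:
            if name in m:
                return m[name]
        raise KeyError(name)

    def __iter__(self):
        seen = set()
        for m in self._maps:
            for k in m:
                if k not in seen:
                    seen.add(k)
                    yield k

    def __len__(self):
        return len(set().union(*self._maps))

def namespace_map_object():
    # Snapshot-style lookup object for the caller's scope.
    return NamespaceMap(sys._getframe(1))

i, j = 1, 2
print("The value of i is %(i)s and j %(j)s" % namespace_map_object())
```

Since the mapping only supports lookup, nothing can be added to the scope through it, which addresses the immutability point above.
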




From ping at lfw.org  Mon Mar 12 06:01:49 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Sun, 11 Mar 2001 21:01:49 -0800 (PST)
Subject: [Python-Dev] Re: Deprecating locals() (was Re: nested scopes and global: some
 corner cases)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEPGJEAA.tim.one@home.com>
Message-ID: <Pine.LNX.4.10.10103112056010.13108-100000@skuld.kingmanhall.org>

On Sun, 11 Mar 2001, Tim Peters wrote:
> OK by me.  Note that we agreed long ago that if nested scopes ever made it
> in, we would need to supply a way to get a "namespace mapping" object so that
> stuff like:
> 
>     print "The value of i is %(i)s and j %(j)s" % locals()
> 
> could be replaced by:
> 
>     print "The value of i is %(i)s and j %(j)s" % namespace_map_object()

I remarked to Jeremy at Python 9 that, given that we have new
variable lookup rules, there should be an API to perform this
lookup.  I suggested that a new method on frame objects would
be a good idea, and Jeremy & Barry seemed to agree.

I was originally thinking of frame.lookup('whatever'), but if
that method happens to be tp_getitem, then i suppose

    print "i is %(i)s and j is %(j)s" % sys.getframe()

would work.  We could call it something else, but one way or
another it's clear to me that this object has to follow lookup
rules that are completely consistent with whatever kind of
scoping is in effect (i.e. throw out *both* globals() and
locals() and provide one function that looks up the whole set
of visible names, rather than just one scope's contents).


-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso




From ping at lfw.org  Mon Mar 12 06:18:06 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Sun, 11 Mar 2001 21:18:06 -0800 (PST)
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <Pine.LNX.4.32.0103111745440.887-100000@localhost.localdomain>
Message-ID: <Pine.LNX.4.10.10103112102030.13108-100000@skuld.kingmanhall.org>

On Sun, 11 Mar 2001, Michel Pelletier wrote:
> As I see it, interfaces satify your first point, remove the need for your
> second and third point, satify your fourth point, and meet the goals of
> your fifth.

For the record, here is a little idea i came up with on the
last day of the conference:

Suppose there is a built-in class called "Interface" with the
special property that whenever any immediate descendant of
Interface is sub-classed, we check to make sure all of its
methods are overridden.  If any methods are not overridden,
something like InterfaceException is raised.

This would be sufficient to provide very simple interfaces,
at least in terms of what methods are part of an interface
(it wouldn't do any type checking, but it could go a step
further and check the number of arguments on each method).

Example:

    >>> class Spam(Interface):
    ...     def islovely(self): pass
    ...
    >>> Spam()
    TypeError: interfaces cannot be instantiated
    >>> class Eggs(Spam):
    ...     def scramble(self): pass
    ...
    InterfaceError: class Eggs does not implement interface Spam
    >>> class LovelySpam(Spam):
    ...     def islovely(self): return 1
    ...
    >>> LovelySpam()
    <LovelySpam instance at ...>

Essentially this would replace the convention of writing a
whole bunch of methods that raise NotImplementedError as a
way of describing an abstract interface, making it a bit easier
to write and causing interfaces to be checked earlier (upon
subclassing, rather than upon method call).

It should be possible to implement this in Python using metaclasses.
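Indeed; here is a rough metaclass sketch of the idea, with the class and exception names invented for illustration:

```python
class InterfaceError(TypeError):
    """Raised when a subclass fails to implement its interface."""

class InterfaceMeta(type):
    def __call__(cls, *args, **kwds):
        # Interfaces themselves (immediate children of Interface)
        # cannot be instantiated.
        if cls is Interface or Interface in cls.__bases__:
            raise TypeError("interfaces cannot be instantiated")
        return super().__call__(*args, **kwds)

    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        for base in bases:
            # For grandchildren of Interface: every public method
            # declared on the interface must be overridden.
            if base is not Interface and Interface in getattr(base, "__bases__", ()):
                missing = [m for m, v in vars(base).items()
                           if callable(v) and not m.startswith("_")
                           and m not in ns]
                if missing:
                    raise InterfaceError("class %s does not implement "
                                         "interface %s" % (name, base.__name__))

class Interface(metaclass=InterfaceMeta):
    pass
```

With these definitions, the interactive session above behaves as described: Spam() raises TypeError, defining Eggs raises InterfaceError, and LovelySpam instantiates normally.
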


-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso




From uche.ogbuji at fourthought.com  Mon Mar 12 08:11:27 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Mon, 12 Mar 2001 00:11:27 -0700
Subject: [Python-Dev] Revive the types sig? 
In-Reply-To: Message from Jeremy Hylton <jeremy@alum.mit.edu> 
   of "Sun, 11 Mar 2001 20:22:04 EST." <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103120711.AAA09711@localhost.localdomain>

Jeremy Hylton:

> If documentation is really the problem you want to solve, I imagine
> we'd make much more progress if we could agree on a javadoc-style
> format for documentation.  The ability to add return-type declarations
> to functions and methods doesn't seem like much of a win.

I know this isn't the types SIG and all, but since it has come up here, I'd 
like to (once again) express my violent disagreement with the efforts to add 
static typing to Python.  After this, I won't pursue the thread further here.

I used to agree with John Max Skaller that if any such beast were needed, it 
should be a more general system for asserting correctness, but I now realize 
that even that avenue might lead to madness.

Python provides more than enough power for any programmer to impose their own 
correctness tests, including those for type-safety.  Paul has pointed out to 
me that the goal of the types SIG is some mechanism that would not affect 
those of us who want nothing to do with static typing; but my fear is that 
once the decision is made to come up with something, such considerations might 
be the first out the window.  Indeed, the last round of talks produced some 
very outre proposals.

Type errors are not even close to the majority of those I make while 
programming in Python, and I'm quite certain that the code I've written in 
Python is much less buggy than code I've written in strongly-typed languages.  
Expressiveness, IMO, is a far better aid to correctness than artificial 
restrictions (see Java for the example of school-marm programming gone amok).

If I understand Jeremy correctly, I am in strong agreement that it is at least 
worth trying the structured documentation approach to signalling pre- and 
post-conditions before turning Python into a rather different language.


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From tim.one at home.com  Mon Mar 12 08:30:03 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 12 Mar 2001 02:30:03 -0500
Subject: [Python-Dev] RE: Revive the types sig?
In-Reply-To: <200103120711.AAA09711@localhost.localdomain>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEACJFAA.tim.one@home.com>

Could we please prune followups on this to the Types-SIG now?  I don't really
need to see three copies of every msg, and everyone who has the slightest
interest in the topic should already be on the Types-SIG.

grumpily y'rs  - tim




From mwh21 at cam.ac.uk  Mon Mar 12 09:24:03 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 12 Mar 2001 08:24:03 +0000
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Guido van Rossum's message of "Sun, 11 Mar 2001 18:55:03 -0500"
References: <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com>
Message-ID: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido at digicool.com> writes:

> Here's the third installment -- my response to Moshe's rational
> numbers PEP.

I'm replying to Guido mainly through laziness.

> > PEP: XXX
> > Title: Adding a Rational Type to Python
> > Version: $Revision$
> > Author: pep at zadka.site.co.il (Moshe Zadka)
> > Status: Draft
> > Python-Version: 2.2
> > Type: Standards Track
> > Created: 11-Mar-2001
> > Post-History:
> > 
> > 
> > Abstract
> > 
> >     Python has no number type whose semantics are that of an
> >     unboundedly precise rational number.
> 
> But one could easily be added to the standard library, and several
> implementations exist, including one in the standard distribution:
> Demo/classes/Rat.py.
> 
> >     This proposal explains the
> >     semantics of such a type, and suggests builtin functions and
> >     literals to support such a type. In addition, if division of
> >     integers would return a non-integer, it could also return a
> >     rational type.
> 
> It's kind of sneaky not to mention in the abstract that this should be
> the default representation for numbers containing a decimal point,
> replacing most use of floats!

If "/" on integers returns a rational (as I presume it will if
rationals get in as it's the only sane return type), then can we
please have the default way of writing rationals as "p/q"?  OK, so it
might be inefficient (a la complex numbers), but it should be trivial
to optimize if required.

Having ddd.ddd be a rational bothers me.  *No* language does that at
present, do they?  Also, writing rational numbers as decimal floats
strikes me as a bit loopy.  Is 

  0.33333333

1/3 or 3333333/10000000?

Certainly, if it's to go in, I'd like to see

> > Literals
> > 
> >     Literals conforming to the RE '\d*\.\d*' will be rational numbers.

in the PEP as justification.

Cheers,
M.

-- 
  MAN:  How can I tell that the past isn't a fiction designed to
        account for the discrepancy between my immediate physical
        sensations and my state of mind?
                   -- The Hitch-Hikers Guide to the Galaxy, Episode 12




From tim.one at home.com  Mon Mar 12 09:52:49 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 12 Mar 2001 03:52:49 -0500
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEADJFAA.tim.one@home.com>

[Michael Hudson]
> ...
> Having ddd.ddd be a rational bothers me.  *No* language does that at
> present, do they?

ABC (Python's closest predecessor) did.  6.02e23 and 1.073242e-301 were also
exact rationals.  *All* numeric literals were.  This explains why they aren't
in Python, but doesn't explain exactly why:  i.e., it didn't work well in
ABC, but it's unclear whether that's because rationals suck, or because you
got rationals even when 10,000 years of computer history <wink> told you that
"." would get you something else.

> Also, writing rational numbers as decimal floats strikes me as a
> bit loopy.  Is
>
>   0.33333333
>
> 1/3 or 3333333/10000000?

Neither, it's 33333333/100000000 (which is what I expect you intended for
your 2nd choice).  Else

    0.33333333 == 33333333/100000000

would be false, and

    0.33333333 * 3 == 1

would be true, and those are absurd if both sides are taken as rational
notations.  OTOH, it's possible to do rational<->string conversion with an
extended notation for "repeating decimals", e.g.

   str(1/3) == "0.(3)"
   eval("0.(3)") == 1/3

would be possible (indeed, I've implemented it in my own rational classes,
but not by default since identifying "the repeating part" in rat->string can
take space proportional to the magnitude of the denominator).
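The rat->string scheme Tim describes can be illustrated in modern Python (a minimal sketch, not his actual class; `repeating_decimal` is a hypothetical helper and assumes 0 < frac < 1):

```python
from fractions import Fraction

def repeating_decimal(frac):
    # Long division, remembering each remainder; when a remainder
    # repeats, the digits since its first appearance are the cycle.
    num, den = frac.numerator, frac.denominator
    digits, seen = [], {}
    rem = num
    while rem and rem not in seen:
        seen[rem] = len(digits)
        rem *= 10
        digits.append(str(rem // den))
        rem %= den
    if not rem:                       # division terminated
        return "0." + "".join(digits)
    i = seen[rem]                     # start of the repeating part
    return "0." + "".join(digits[:i]) + "(" + "".join(digits[i:]) + ")"

assert repeating_decimal(Fraction(1, 3)) == "0.(3)"
assert repeating_decimal(Fraction(1, 4)) == "0.25"
assert repeating_decimal(Fraction(1, 6)) == "0.1(6)"
```

As Tim notes, the dictionary of seen remainders is what can grow in proportion to the denominator.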

but-"."-is-mnemonic-for-the-"point"-in-"floating-point"-ly y'rs  - tim




From moshez at zadka.site.co.il  Mon Mar 12 12:51:36 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 13:51:36 +0200 (IST)
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com>
Message-ID: <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>

On 12 Mar 2001 08:24:03 +0000, Michael Hudson <mwh21 at cam.ac.uk> wrote:
 
> If "/" on integers returns a rational (as I presume it will if
> rationals get in as it's the only sane return type), then can we
> please have the default way of writing rationals as "p/q"?

That's proposed in a different PEP. Personally (*shock*) I'd like
all my PEPs to go in, but we sort of agreed that they will only
get in if they can get in in separate pieces.
  
> Having ddd.ddd be a rational bothers me.  *No* language does that at
> present, do they?  Also, writing rational numbers as decimal floats
> strikes me as a bit loopy.  Is 
> 
>   0.33333333
> 
> 1/3 or 3333333/10000000?

The latter. But decimal numbers *are* rationals...just the denominator
is always a power of 10.
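In today's Python this reading is exactly what `fractions.Fraction` does with a decimal string:

```python
from fractions import Fraction

# A decimal string read as a rational: the denominator is a power of 10.
assert Fraction("0.33333333") == Fraction(33333333, 100000000)
assert Fraction("0.33333333") != Fraction(1, 3)
```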

> Certainly, if it's to go in, I'd like to see
> 
> > > Literals
> > > 
> > >     Literals conforming to the RE '\d*\.\d*' will be rational numbers.
> 
> in the PEP as justification.
 
I'm not understanding you. Do you think it needs more justification, or
that it is justification for something?
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From mwh21 at cam.ac.uk  Mon Mar 12 13:03:17 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 12 Mar 2001 12:03:17 +0000
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: "Tim Peters"'s message of "Mon, 12 Mar 2001 03:52:49 -0500"
References: <LNBBLJKPBEHFEDALKOLCOEADJFAA.tim.one@home.com>
Message-ID: <m3zoer6vp6.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one at home.com> writes:

> [Michael Hudson]
> > ...
> > Having ddd.ddd be a rational bothers me.  *No* language does that at
> > present, do they?
> 
> ABC (Python's closest predecessor) did.  6.02e23 and 1.073242e-301
> were also exact rationals.  *All* numeric literals were.  This
> explains why they aren't in Python, but doesn't explain exactly why:
> i.e., it didn't work well in ABC, but it's unclear whether that's
> because rationals suck, or because you got rationals even when
> 10,000 years of computer history <wink> told you that "." would get
> you something else.

Well, it seems likely that it wouldn't work in Python either, doesn't it?
Especially with 10010 years of computer history.

> > Also, writing rational numbers as decimal floats strikes me as a
> > bit loopy.  Is
> >
> >   0.33333333
> >
> > 1/3 or 3333333/10000000?
> 
> Neither, it's 33333333/100000000 (which is what I expect you intended for
> your 2nd choice).

Err, yes.  I was feeling too lazy to count 0's.

[snip]
> OTOH, it's possible to do rational<->string conversion with an
> extended notation for "repeating decimals", e.g.
> 
>    str(1/3) == "0.(3)"
>    eval("0.(3)") == 1/3
> 
> would be possible (indeed, I've implemented it in my own rational
> classes, but not by default since identifying "the repeating part"
> in rat->string can take space proportional to the magnitude of the
> denominator).

Hmm, I wonder what the repr of rational(1,3) is...

> but-"."-is-mnemonic-for-the-"point"-in-"floating-point"-ly y'rs  - tim

Quite.

Cheers,
M.

-- 
  Slim Shady is fed up with your shit, and he's going to kill you.
                         -- Eminem, "Public Service Announcement 2000"




From mwh21 at cam.ac.uk  Mon Mar 12 13:07:19 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 12 Mar 2001 12:07:19 +0000
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Moshe Zadka's message of "Mon, 12 Mar 2001 13:51:36 +0200 (IST)"
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk> <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>
Message-ID: <m3wv9v6vig.fsf@atrus.jesus.cam.ac.uk>

Moshe Zadka <moshez at zadka.site.co.il> writes:

> On 12 Mar 2001 08:24:03 +0000, Michael Hudson <mwh21 at cam.ac.uk> wrote:
>  
> > If "/" on integers returns a rational (as I presume it will if
> > rationals get in as it's the only sane return type), then can we
> > please have the default way of writing rationals as "p/q"?
> 
> That's proposed in a different PEP. Personally (*shock*) I'd like
> all my PEPs to go in, but we sort of agreed that they will only
> get in if they can get in in separate pieces.

Fair enough.

> > Having ddd.ddd be a rational bothers me.  *No* language does that at
> > present, do they?  Also, writing rational numbers as decimal floats
> > strikes me as a bit loopy.  Is 
> > 
> >   0.33333333
> > 
> > 1/3 or 3333333/10000000?
> 
> The latter. But decimal numbers *are* rationals...just the denominator
> is always a power of 10.

Well, floating point numbers are rationals too, only the denominator
is always a power of 2 (or sixteen, if you're really lucky).
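This can be checked directly in modern Python, where every float exposes its exact power-of-two ratio:

```python
from fractions import Fraction

# Every float is exactly some p / 2**k; as_integer_ratio() exposes it.
num, den = (0.1).as_integer_ratio()
assert den & (den - 1) == 0                   # denominator is a power of 2
assert Fraction(num, den) == Fraction(0.1)    # the exact value of the float
assert Fraction(num, den) != Fraction(1, 10)  # ...which is not 1/10
```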

I suppose I don't have any rational (groan) objections, but it just
strikes me instinctively as a Bad Idea.

> > Certainly, if it's to go in, I'd like to see
                                                 ^
                                             "more than"
sorry.

> > > > Literals
> > > > 
> > > >     Literals conforming to the RE '\d*\.\d*' will be rational numbers.
> > 
> > in the PEP as justification.
>  
> I'm not understanding you. Do you think it needs more justification,
> or that it is justification for something?

I think it needs more justification.

Well, actually I think it should be dropped, but if that's not going
to happen, then it needs more justification.

Cheers,
M.

-- 
  To summarise the summary of the summary:- people are a problem.
                   -- The Hitch-Hikers Guide to the Galaxy, Episode 12




From paulp at ActiveState.com  Mon Mar 12 13:27:29 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 04:27:29 -0800
Subject: [Python-Dev] Adding a Rational Type to Python
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>
Message-ID: <3AACC0B1.4AD48247@ActiveState.com>

Whether or not Python adopts rationals as the default number type, a
rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
2.2.

I think that Python users should be allowed to experiment with it before
it becomes the default. If I recode my existing programs to use
rationals and they experience an exponential slow-down, that might
influence my recommendation to Guido. 
-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From thomas at xs4all.net  Mon Mar 12 14:16:00 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 12 Mar 2001 14:16:00 +0100
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <m3zoer6vp6.fsf@atrus.jesus.cam.ac.uk>; from mwh21@cam.ac.uk on Mon, Mar 12, 2001 at 12:03:17PM +0000
References: <LNBBLJKPBEHFEDALKOLCOEADJFAA.tim.one@home.com> <m3zoer6vp6.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <20010312141600.Q404@xs4all.nl>

On Mon, Mar 12, 2001 at 12:03:17PM +0000, Michael Hudson wrote:

> Hmm, I wonder what the repr of rational(1,3) is...

Well, 'rational(1,3)', of course. Unless 1/3 returns a rational, in which
case it can just return '1/3' :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Mon Mar 12 14:51:22 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 08:51:22 -0500
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
In-Reply-To: Your message of "Mon, 12 Mar 2001 02:39:51 +0100."
             <001c01c0aa95$55836f60$325821c0@newmexico> 
References: <LNBBLJKPBEHFEDALKOLCOEMPJEAA.tim.one@home.com> <200103112137.QAA13084@cj20424-a.reston1.va.home.com>  
            <001c01c0aa95$55836f60$325821c0@newmexico> 
Message-ID: <200103121351.IAA18642@cj20424-a.reston1.va.home.com>

> [GvR]
> > Actually, I intend to deprecate locals().  For now, globals() are
> > fine.  I also intend to deprecate vars(), at least in the form that is
> > equivalent to locals().

[Samuele]
> That's fine for me. Will that deprecation already be active in 2.1, e.g.
> having locals() and param-less vars() raise a warning?

Hm, I hadn't thought of doing it right now.

> I imagine a (new) function that produce a snap-shot of the values in the
> local,free and cell vars of a scope can do the job required for simple 
> debugging (the copy will not allow to modify back the values), 
> or another approach...

Maybe.  I see two solutions: a function that returns a copy, or a
function that returns a "lazy mapping".  The former could be done as
follows given two scopes:

import __builtin__  # needed for the builtins lookup below (Python 2 name)

def namespace():
    d = __builtin__.__dict__.copy()
    d.update(globals())
    d.update(locals())
    return d

The latter like this:

def namespace():
    class C:
        def __init__(self, g, l):
            self.__g = g
            self.__l = l
        def __getitem__(self, key):
            try:
                return self.__l[key]
            except KeyError:
                try:
                    return self.__g[key]
                except KeyError:
                    return __builtin__.__dict__[key]
    return C(globals(), locals())

But of course they would have to work harder to deal with nested
scopes and cells etc.
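For reference, the "lazy mapping" variant is essentially what `collections.ChainMap` (added much later, in Python 3.3) provides; a minimal sketch of the same lookup order:

```python
import builtins
from collections import ChainMap

# Lookup falls through left to right, the same order as C.__getitem__
# in the code above: "locals", then "globals", then builtins.
local_ns = {"x": 10}
global_ns = {"x": 1, "y": 2}
ns = ChainMap(local_ns, global_ns, vars(builtins))

assert ns["x"] == 10     # shadowed by the "local" mapping
assert ns["y"] == 2      # falls through to the "global" mapping
assert ns["len"] is len  # falls through to builtins
```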

I'm not sure if we should add this to 2.1 (if only because it's more
work than I'd like to put in this late in the game) and then I'm not
sure if we should deprecate locals() yet.

> In the meantime (if there's a meantime) is ok for jython to behave
> the way I have explained or not? 
> wrt to exec+locals()+global+nested scopes .

Sure.  You may even document it as one of the known differences.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 12 15:50:44 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 09:50:44 -0500
Subject: [Python-Dev] Re: Unifying Long Integers and Integers
In-Reply-To: Your message of "Mon, 12 Mar 2001 02:25:23 +0200."
             <20010312002523.94F68AA3A@darjeeling.zadka.site.co.il> 
References: <200103112325.SAA14311@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>  
            <20010312002523.94F68AA3A@darjeeling.zadka.site.co.il> 
Message-ID: <200103121450.JAA19125@cj20424-a.reston1.va.home.com>

> [re: numerical PEPs mailing list] 
> > Please help yourself.  I recommend using SF since it requires less
> > overhead for the poor python.org sysadmins.
> 
> Err...I can't. Requesting an SF mailing list is an admin operation.

OK.  I won't make the request (too much going on still) so please ask
someone else at PythonLabs to do it.  Don't just sit there waiting for
one of us to read this mail and do it!

> What do you think? Should I issue a warning on shifting an integer in cases
> where it would have been truncated/sign-extended under the old semantics?

You'll have to, because the change in semantics will definitely break
some code.

> It also means overflow raises a different exception. Again, I suspect
> it will be used only in cases where the algorithm is supposed to maintain
> that internal results are not bigger than the inputs or things like that,
> and there only as a debugging aid -- so I don't think that this would be this
> bad. And if people want to avoid using the longs for performance reasons,
> then the implementation should definitely *not* lie to them.

It's not clear that using something derived from the machine word size
is the most helpful here.  Maybe a separate integral type that has a
limited range should be used for this.

> OK, so what's the concrete advice?

Propose both alternatives in the PEP.  It's too early to make
decisions -- first we need to have a catalog of our options, and their
consequences.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 12 15:52:20 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 09:52:20 -0500
Subject: [Python-Dev] Re: Non-integer Division
In-Reply-To: Your message of "Mon, 12 Mar 2001 02:36:58 +0200."
             <20010312003658.01096AA27@darjeeling.zadka.site.co.il> 
References: <200103112337.SAA14344@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>  
            <20010312003658.01096AA27@darjeeling.zadka.site.co.il> 
Message-ID: <200103121452.JAA19139@cj20424-a.reston1.va.home.com>

> > > // Operator
> > 
> > Note: we could wind up using a different way to spell this operator,
> > e.g. Pascal uses 'div'.  The disadvantage of 'div' is that it
> > introduces a new reserved word, with all the issues it creates.  The
> > disadvantage of '//' is that it means something very different to Java
> > and C++ users.
> 
> I have zero (0) intuition about what is better. You choose --- I have
> no opinions on this. If we do go the "div" route, I need to also think
> up a syntactic migration path once I figure out the parsing issues
> involved. This isn't an argument -- just something you might want to 
> consider before pronouncing on "div".

As I said in the other thread, it's too early to make the decision --
just present both options in the PEP, and arguments pro/con for each.

> > Maybe for compatibility of bytecode files we should come up with a
> > better name, e.g. FLOAT_DIV?
> 
> Hmmmm.....bytecode files so far have failed to be compatible for
> any revision. I have no problems with that, just that I feel that if
> we're serious about compatibility, we should say so, and if we're not,
> then half-assed measures will not help.

Fair enough.

> [re: from __future__ import non_integer_division] 
> > I find "non_integer_division" rather long.  Maybe it should be called
> > "float_division"?
> 
> I have no problems with that -- except that if the rational PEP is accepted,
> then this would be rational_integer_division, and I didn't want to commit
> myself yet.

Understood.

> You haven't commented yet about the rational PEP, so I don't know if that's
> even an option.

Yes I have, but in summary, I still think rationals are a bad idea.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From moshez at zadka.site.co.il  Mon Mar 12 15:55:31 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 12 Mar 2001 16:55:31 +0200 (IST)
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <3AACC0B1.4AD48247@ActiveState.com>
References: <3AACC0B1.4AD48247@ActiveState.com>, <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>
Message-ID: <20010312145531.649E1AA27@darjeeling.zadka.site.co.il>

On Mon, 12 Mar 2001, Paul Prescod <paulp at ActiveState.com> wrote:

> Whether or not Python adopts rationals as the default number type, a
> rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> 2.2.

OK, how about this:

1. I remove the "literals" part from my PEP to another PEP
2. I add to rational() an ability to take strings, such as "1.3" and 
   make rationals out of them

Does anyone have any objections to

a. doing that
b. the PEP that would result from 1+2
?

I even volunteer to code the first prototype.
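A minimal sketch of what point 2 might look like (`rational_from_string` is a hypothetical prototype; it ignores signs and exponents, and in today's Python `Fraction("1.3")` already does this directly):

```python
from fractions import Fraction

def rational_from_string(s):
    # "1.3" -> 13/10: the fractional digits supply a power-of-10
    # denominator, so the result is exact.
    whole, _, frac = s.partition(".")
    den = 10 ** len(frac)
    num = int(whole or "0") * den + int(frac or "0")
    return Fraction(num, den)

assert rational_from_string("1.3") == Fraction(13, 10)
assert rational_from_string("0.5") == Fraction(1, 2)
assert rational_from_string("7") == Fraction(7)
```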
 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Mon Mar 12 15:57:31 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 09:57:31 -0500
Subject: [Python-Dev] Re: Adding a Rational Type to Python
In-Reply-To: Your message of "Mon, 12 Mar 2001 03:00:25 +0200."
             <20010312010025.C93A7AA27@darjeeling.zadka.site.co.il> 
References: <200103112355.SAA14385@cj20424-a.reston1.va.home.com>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il>  
            <20010312010025.C93A7AA27@darjeeling.zadka.site.co.il> 
Message-ID: <200103121457.JAA19188@cj20424-a.reston1.va.home.com>

> > Question: the time module's time() function currently returns a
> > float.  Should it return a rational instead?  This is a trick question.
> 
> It should return the most exact number the underlying operating system
> supports. For example, in OSes supporting gettimeofday, return a rational
> built from tv_sec and tv_usec.

I told you it was a trick question. :-)

Time may be *reported* in microseconds, but it's rarely *accurate* to
microseconds.  Because the precision is unclear, I think a float is
more appropriate here.

--Guido van Rossum (home page: http://www.python.org/~guido/)




From paulp at ActiveState.com  Mon Mar 12 16:09:37 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 07:09:37 -0800
Subject: [Python-Dev] Adding a Rational Type to Python
References: <3AACC0B1.4AD48247@ActiveState.com>, <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il> <20010312145531.649E1AA27@darjeeling.zadka.site.co.il>
Message-ID: <3AACE6B1.A599279D@ActiveState.com>

Moshe Zadka wrote:
> 
> On Mon, 12 Mar 2001, Paul Prescod <paulp at ActiveState.com> wrote:
> 
> > Whether or not Python adopts rationals as the default number type, a
> > rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> > 2.2.
> 
> OK, how about this:
> 
> 1. I remove the "literals" part from my PEP to another PEP
> 2. I add to rational() an ability to take strings, such as "1.3" and
>    make rationals out of them

+1

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From guido at digicool.com  Mon Mar 12 16:09:15 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 10:09:15 -0500
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: Your message of "Mon, 12 Mar 2001 04:27:29 PST."
             <3AACC0B1.4AD48247@ActiveState.com> 
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il>  
            <3AACC0B1.4AD48247@ActiveState.com> 
Message-ID: <200103121509.KAA19299@cj20424-a.reston1.va.home.com>

> Whether or not Python adopts rationals as the default number type, a
> rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> 2.2.
> 
> I think that Python users should be allowed to experiment with it before
> it becomes the default. If I recode my existing programs to use
> rationals and they experience an exponential slow-down, that might
> influence my recommendation to Guido. 

Excellent idea.  Moshe is already biting:

[Moshe]
> On Mon, 12 Mar 2001, Paul Prescod <paulp at ActiveState.com> wrote:
> 
> > Whether or not Python adopts rationals as the default number type, a
> > rational() built-in would be a Good Thing. I'd like to see it in 2.1 or
> > 2.2.
> 
> OK, how about this:
> 
> 1. I remove the "literals" part from my PEP to another PEP
> 2. I add to rational() an ability to take strings, such as "1.3" and 
>    make rationals out of them
> 
> Does anyone have any objections to
> 
> a. doing that
> b. the PEP that would result from 1+2
> ?
> 
> I even volunteer to code the first prototype.

I think that would make it a better PEP, and I recommend doing this,
because nothing can be so convincing as a working prototype!

Even so, I'm not sure that rational() should be added to the standard
set of built-in functions, but I'm much less opposed this than I am
against making 0.5 or 1/2 return a rational.  After all we have
complex(), so there's certainly a case to be made for rational().

Note: if you call it fraction() instead, it may appeal more to the
educational crowd!  (In grade school, we learn fractions; not until
late in high school do we learn that mathematicians call fractions
rationals.  It's the same as Randy Pausch's argument about what to call
a quarter turn: not 90 degrees, not pi/2, just call it 1/4 turn. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas at xs4all.net  Mon Mar 12 16:55:12 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 12 Mar 2001 16:55:12 +0100
Subject: [Python-Dev] Adding a Rational Type to Python
In-Reply-To: <200103121509.KAA19299@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 10:09:15AM -0500
References: <m34rwz8kf0.fsf@atrus.jesus.cam.ac.uk>, <20010311151944.716EAAA3A@darjeeling.zadka.site.co.il> <200103112355.SAA14385@cj20424-a.reston1.va.home.com> <20010312115136.C2697AA27@darjeeling.zadka.site.co.il> <3AACC0B1.4AD48247@ActiveState.com> <200103121509.KAA19299@cj20424-a.reston1.va.home.com>
Message-ID: <20010312165512.S404@xs4all.nl>

On Mon, Mar 12, 2001 at 10:09:15AM -0500, Guido van Rossum wrote:

> Note: if you call it fraction() instead, it may appeal more to the
> educational crowd!  (In grade school, we learn fractions; not until
> late in high school do we learn that mathematicians call fractions
> rationals.  It's the same as Randy Pausch's argument about what to call
> a quarter turn: not 90 degrees, not pi/2, just call it 1/4 turn. :-)

+1 on fraction(). +0 on making it a builtin instead of a separate module.
(I'm not nearly as worried about adding builtins as I am with adding
keywords <wink>)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From pedroni at inf.ethz.ch  Mon Mar 12 17:47:22 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Mon, 12 Mar 2001 17:47:22 +0100 (MET)
Subject: [Python-Dev] about sparse inputs from the jython userbase & types, language extensions
Message-ID: <200103121647.RAA15331@core.inf.ethz.ch>

Hi.

What follows is maybe too abstract or naive to be useful; if reading this is a
waste of time: sorry.
Further, I ignore the content of the P3K kick-start session...

"We" are planning to add many features to python. It has also
been explicitly written that this is for the developers to have fun too ;).

Exact arithmetic, behind the scene promotion on overflow, etc...
nested scopes, iterators

A bit joking: lim(t->oo) python ~ Common Lisp

Ok, in python programs and data are not that much the same:
we don't have CL macros (but AFAIK dylan is an example of a language
without data & programs having the same structure but with CL-like macros, so
maybe...), and "we" are not as masochistic as a committee can be, and we
don't have all the history that CL has to carry.

Python does not have (as of now) optional static typing (CL has such a beast,
as everybody knows), but it is always haunting around, mainly for documentation
and error checking purposes.

Many of the proposals also go in the direction of making life easier
for newbies, even for programming newbies...
(this is not a paradox, a regular and well chosen subset of CL can
be appropriate for them and the world knows a beast called scheme).

Joke: making newbies happy is dangerous, then they will never want
to learn C ;)

The point: what is some (sparse) part of jython user base asking for?

1. better java integration (for sure).
2. p-e-r-f-o-r-m-a-n-c-e

They ask why jython is so slow, why it does not exploit unboxed ints or floats
(the more informed ones),
and whether it is not possible to translate jython to java to achieve performance...

The python answer about performance is:
- Think, you don't really need it,
- find the hotspot and code it in C,
- programmer speed is more important than pure program speed,
- python is just a glue language
The Jython one is not that different.

If someone comes from C or a lot of java this is fair.
For the happy newbie that's disappointing. (And it can become
frustrating even for the experienced open-source programmer
who wants to do more in less time: being able to do as many things
as possible in python would be nice <wink>.)

If python's importance increases, IMHO this will become a real issue
(from the java side too, people are always asking for more performance).

If some software house gives them the right amount of performance and dynamism
out of python for $xK (that's what happens nowadays with CL), it is even more disappointing.

(I'm aware that dealing with this, also from a purely code-complexity viewpoint,
may be too much for an open project in terms of motivation.)

regards, Samuele Pedroni.

PS: I'm aware of enough theoretical approaches to performance to know
that optional typing is just one of the possibilities; the point is that
performance as an issue should not be underestimated.




From pedroni at inf.ethz.ch  Mon Mar 12 21:23:25 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Mon, 12 Mar 2001 21:23:25 +0100 (MET)
Subject: [Python-Dev] Deprecating locals() (was Re: nested scopes and global: some corner cases)
Message-ID: <200103122023.VAA20984@core.inf.ethz.ch>

Hi.

[GvR]
> > I imagine a (new) function that produce a snap-shot of the values in the
> > local,free and cell vars of a scope can do the job required for simple 
> > debugging (the copy will not allow to modify back the values), 
> > or another approach...
> 
> Maybe.  I see two solutions: a function that returns a copy, or a
> function that returns a "lazy mapping".  The former could be done as
> follows given two scopes:
> 
> def namespace():
>     d = __builtin__.__dict__.copy()
>     d.update(globals())
>     d.update(locals())
>     return d
> 
> The latter like this:
> 
> def namespace():
>     class C:
>         def __init__(self, g, l):
>             self.__g = g
>             self.__l = l
>         def __getitem__(self, key):
>             try:
>                 return self.__l[key]
>             except KeyError:
>                 try:
>                     return self.__g[key]
>                 except KeyError:
>                     return __builtin__.__dict__[key]
>     return C(globals(), locals())
> 
> But of course they would have to work harder to deal with nested
> scopes and cells etc.
> 
> I'm not sure if we should add this to 2.1 (if only because it's more
> work than I'd like to put in this late in the game) and then I'm not
> sure if we should deprecate locals() yet.
But in any case we would need something similar to repair pdb,
this independently of locals deprecation...

Samuele.




From thomas at xs4all.net  Mon Mar 12 22:04:31 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 12 Mar 2001 22:04:31 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
Message-ID: <20010312220425.T404@xs4all.nl>

Contrary to Guido's keynote last week <wink> there are still two warts I
know of in the current CPython. One is the fact that keywords cannot be used
as identifiers anywhere, the other is the fact that 'continue' can still not
be used inside a 'finally' clause. If I remember correctly, the latter isn't
too hard to fix, it just needs a decision on what it should do :)

Currently, falling out of a 'finally' block will reraise the exception, if
any. Using 'return' and 'break' will drop the exception and continue on as
usual. However, that makes sense (imho) mostly because 'break' will continue
past the try/finally block and 'return' will break out of the function
altogether. Neither has a chance of re-entering the try/finally block
at all. I'm not sure if that would make sense for 'continue' inside
'finally'.

On the other hand, I'm not sure if it makes sense for 'break' to continue
but for 'continue' to break. :)
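A quick illustration of the behaviour Thomas describes (modern Python): 'return' in a 'finally' clause discards the pending exception instead of re-raising it:

```python
def swallow():
    try:
        raise ValueError("pending exception")
    finally:
        # Falling out of 'finally' would re-raise the ValueError,
        # but 'return' drops it and leaves the function normally.
        return "ok"

assert swallow() == "ok"   # no exception escapes
```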

As for the other wart, I still want to fix it, but I'm not sure when I get
the chance to grok the parser-generator enough to actually do it :) 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From msw at redhat.com  Mon Mar 12 22:47:05 2001
From: msw at redhat.com (Matt Wilson)
Date: Mon, 12 Mar 2001 16:47:05 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
Message-ID: <20010312164705.C641@devserv.devel.redhat.com>

We've been auditing various code lately to check for /tmp races and so
on.  It seems that tempfile.mktemp() is used throughout the Python
library.  While nice and portable, tempfile.mktemp() is vulnerable to
races.

The TemporaryFile does a nice job of handling the filename returned by
mktemp properly, but there are many modules that don't.

Should I attempt to patch them all to use TemporaryFile?  Or set up
conditional use of mkstemp on those systems that support it?
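For reference, `tempfile.mkstemp` (which ended up in the stdlib in Python 2.3) gives the race-free pattern: the file is created and opened in one atomic step, so nothing can slip in between choosing the name and opening it. A minimal sketch:

```python
import os
import tempfile

# Race-free alternative to mktemp()-then-open(): mkstemp() creates
# and opens the file atomically (with O_EXCL), so an attacker cannot
# pre-create or symlink the name before we open it.
fd, path = tempfile.mkstemp(suffix=".txt")
try:
    with os.fdopen(fd, "w") as f:
        f.write("scratch data")
    with open(path) as f:
        assert f.read() == "scratch data"
finally:
    os.remove(path)
```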

Cheers,

Matt
msw at redhat.com



From DavidA at ActiveState.com  Mon Mar 12 23:01:02 2001
From: DavidA at ActiveState.com (David Ascher)
Date: Mon, 12 Mar 2001 14:01:02 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
Message-ID: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com>

With apologies for the delay, here are my notes from the numeric coercion
day.

There were many topics which were defined by the Timbot to be within the
scope of the discussion.  Those included:

  - Whether numbers should be rationals / binary FP / decimal FP / etc.
  - Whether there should be support for both exact and inexact computations
  - What division means.

There were few "deliverables" at the end of the day, mostly a lot of
consternation on all sides of the multi-faceted divide, with the impression
in at least this observer's mind that there are few things more
controversial than what numbers are for and how they should work.  A few
things emerged, however:

  0) There is tension between making math in Python 'understandable' to a
high-school kid and making math in Python 'useful' to an engineer/scientist.

  1) We could consider using the new warnings framework for noting things
which are "dangerous" to do with numbers, such as:

       - noting that an operation on 'plain' ints resulted in a 'long'
result.
       - using == when comparing floating point numbers

  2) The Fortran notion of "Kind" as an orthogonal notion to "Type" may make
sense (details to be fleshed out).

  3) Pythonistas are good at quotes:

     "You cannot stop people from complaining, but you can influence
      what they complain about." - Tim Peters

     "The only problem with using rationals for money is that money is,
      well, not rational." - Moshe Zadka

     "Don't get too apoplectic about this." - Tim Peters

  4) We all agreed that "2" + "23" will not equal "25".
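The "== when comparing floating point numbers" item in point 1 refers
to the classic binary-FP pitfall, which a warnings-framework check
could flag. A small illustration:

```python
# 0.1 and 0.2 have no exact binary floating point representation,
# so their sum is not exactly 0.3.
a = 0.1 + 0.2
assert a != 0.3                 # surprising but true
assert abs(a - 0.3) < 1e-9      # compare with a tolerance instead
```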

--david ascher




From Greg.Wilson at baltimore.com  Mon Mar 12 23:29:31 2001
From: Greg.Wilson at baltimore.com (Greg Wilson)
Date: Mon, 12 Mar 2001 17:29:31 -0500
Subject: [Python-Dev] more Solaris extension grief
Message-ID: <930BBCA4CEBBD411BE6500508BB3328F1AC593@nsamcanms1.ca.baltimore.com>

I just updated my copy of Python from the CVS repo,
rebuilt on Solaris 5.8, and tried to compile an
extension that is built on top of C++.  I am now
getting lots 'n' lots of error messages as shown
below.  My compile line is:

gcc -shared  ./PyEnforcer.o  -L/home/gvwilson/cozumel/merlot/enforcer
-lenforcer -lopenssl -lstdc++  -o ./PyEnforcer.so

Has anyone seen this problem before?  It does *not*
occur on Linux, using the same version of g++.

Greg

p.s. I configured Python --with-gcc=g++

Text relocation remains                         referenced
    against symbol                  offset      in file
istream type_info function          0x1c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strstream.o)
istream type_info function          0x18
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdiostream.o
)
_IO_stderr_buf                      0x2c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_stderr_buf                      0x28
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_default_xsputn                  0xc70
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(fileops.o)
_IO_default_xsputn                  0xa4
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(streambuf.o)
lseek                               0xa74
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(fileops.o)
_IO_str_init_readonly               0x620
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strstream.o)
_IO_stdout_buf                      0x24
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_stdout_buf                      0x38
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(stdstreams.o)
_IO_file_xsputn                     0x43c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(filebuf.o)
fstat                               0xa8c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(fileops.o)
streambuf::sputbackc(char)          0x68c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x838
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x8bc
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x1b4c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x1b80
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x267c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
streambuf::sputbackc(char)          0x26f8
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(iostream.o)
_IO_file_stat                       0x40c
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(filebuf.o)
_IO_setb                            0x844
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(genops.o)
_IO_setb                            0x210
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strops.o)
_IO_setb                            0xa8
/usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(filedoalloc.o
)
... and so on and so on ...



From barry at digicool.com  Tue Mar 13 00:15:15 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 18:15:15 -0500
Subject: [Python-Dev] Revive the types sig? 
References: <jeremy@alum.mit.edu>
	<15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103120711.AAA09711@localhost.localdomain>
Message-ID: <15021.22659.616556.298360@anthem.wooz.org>

>>>>> "UO" == Uche Ogbuji <uche.ogbuji at fourthought.com> writes:

    UO> I know this isn't the types SIG and all, but since it has come
    UO> up here, I'd like to (once again) express my violent
    UO> disagreement with the efforts to add static typing to Python.
    UO> After this, I won't pursue the thread further here.

Thank you Uche!  I couldn't agree more, and will also try to follow
your example, at least until we see much more concrete proposals from
the types-sig.  I just want to make a few comments for the record.

First, it seemed to me that the greatest push for static type
annotations at IPC9 was from the folks implementing Python on top of
frameworks other than C.  I know from my own experiences that there is
the allure of improved performance, e.g. JPython, given type hints
available to the compiler.  While perhaps a laudable goal, this
doesn't seem to be a stated top priority of Paul's.

Second, if type annotations are to be seriously considered for
inclusion in Python, I think we as a community need considerable
experience with a working implementation.  Yes, we need PEPs and specs
and such, but we need something real and complete that we can play
with, /without/ having to commit to its acceptance in mainstream
Python.  Therefore, I think it'll be very important for type
annotation proponents to figure out a way to allow people to see and
play with an implementation in an experimental way.

This might mean an extensive set of patches, a la Stackless.  After
seeing and talking to Neil and Andrew about PTL and Quixote, I think
there might be another way.  It seems that their approach might serve
as a framework for experimental Python syntaxes with minimal overhead.
If I understand their work correctly, they have their own compiler
which is built on Jeremy's tools, and which accepts a modified Python
grammar, generating different but compatible bytecode sequences.
E.g., their syntax has a "template" keyword approximately equivalent
to "def" and they do something different with bare strings left on the
stack.

The key trick is that it all hooks together with an import hook so
normal Python code doesn't need to know anything about the mechanics
of PTL compilation.  Given a homepage.ptl file, they just do an
"import homepage" and this gets magically transformed into a .ptlc
file and normal Python objects.

If I've got this correct, it seems like it would be a powerful tool
for playing with alternative Python syntaxes.  Ideally, the same
technique would allow the types-sig folks to create a working
implementation that would require only the installation of an import
hook.  This would let them build their systems with type annotations
and prove their overwhelming benefit to the skeptical among us.
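The import-hook trick described above can be sketched with a minimal
finder/loader pair. (This uses the modern importlib API rather than
the ihooks/imputil machinery of the time; the module and class names
are invented for illustration.)

```python
import importlib.abc
import importlib.util
import sys

# Loader that compiles (hypothetically transformed) source into a
# normal module object.
class TransformingLoader(importlib.abc.Loader):
    SOURCE = "def greet():\n    return 'hello from hooked module'\n"

    def create_module(self, spec):
        return None  # default module creation is fine

    def exec_module(self, module):
        # In PTL, this is the point where 'template' would have been
        # rewritten to 'def' before compilation.
        code = compile(self.SOURCE, module.__name__, "exec")
        exec(code, module.__dict__)

# Finder that intercepts one magic module name; everything else falls
# through to the normal import machinery.
class TransformingFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, name, path, target=None):
        if name == "hooked_demo":
            return importlib.util.spec_from_loader(name, TransformingLoader())
        return None

sys.meta_path.insert(0, TransformingFinder())

# Client code needs no knowledge of the transformation:
import hooked_demo
assert hooked_demo.greet() == "hello from hooked module"
```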

Cheers,
-Barry



From guido at digicool.com  Tue Mar 13 00:19:39 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:19:39 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: Your message of "Mon, 12 Mar 2001 14:01:02 PST."
             <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com> 
References: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com> 
Message-ID: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>

> With apologies for the delay, here are my notes from the numeric coercion
> day.
> 
> There were many topics which were defined by the Timbot to be within the
> scope of the discussion.  Those included:
> 
>   - Whether numbers should be rationals / binary FP / decimal FP / etc.
>   - Whether there should be support for both exact and inexact computations
>   - What division means.
> 
> There were few "deliverables" at the end of the day, mostly a lot of
> consternation on all sides of the multi-faceted divide, with the impression
> in at least this observer's mind that there are few things more
> controversial than what numbers are for and how they should work.  A few
> things emerged, however:
> 
>   0) There is tension between making math in Python 'understandable' to a
> high-school kid and making math in Python 'useful' to an engineer/scientist.
> 
>   1) We could consider using the new warnings framework for noting things
> which are "dangerous" to do with numbers, such as:
> 
>        - noting that an operation on 'plain' ints resulted in a 'long'
> result.
>        - using == when comparing floating point numbers
> 
>   2) The Fortran notion of "Kind" as an orthogonal notion to "Type" may make
> sense (details to be fleshed out).
> 
>   3) Pythonistas are good at quotes:
> 
>      "You cannot stop people from complaining, but you can influence
>       what they complain about." - Tim Peters
> 
>      "The only problem with using rationals for money is that money is,
>       well, not rational." - Moshe Zadka
> 
>      "Don't get too apoplectic about this." - Tim Peters
> 
>   4) We all agreed that "2" + "23" will not equal "25".
> 
> --david ascher

Thanks for the notes.  I couldn't be at the meeting, but I attended a
post-meeting lunch roundtable, where much of the above confusion was
reiterated for my convenience.  Moshe's three or four PEPs also came
out of that.  One thing we *could* agree to there, after I pressed
some people: 1/2 should return 0.5.  Possibly 1/2 should not be a
binary floating point number -- but then 0.5 shouldn't either, and
whatever happens, these (1/2 and 0.5) should have the same type, be it
rational, binary float, or decimal float.
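(Historical note: this is the outcome PEP 238 eventually delivered,
with '/' as true division and '//' as floor division.)

```python
# The behavior agreed to at the lunch, as later shipped by PEP 238:
assert 1 / 2 == 0.5
assert 1 // 2 == 0
assert type(1 / 2) is type(0.5)   # same type as the literal
```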

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Tue Mar 13 00:23:06 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:23:06 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
In-Reply-To: Your message of "Mon, 12 Mar 2001 16:47:05 EST."
             <20010312164705.C641@devserv.devel.redhat.com> 
References: <20010312164705.C641@devserv.devel.redhat.com> 
Message-ID: <200103122323.SAA22876@cj20424-a.reston1.va.home.com>

> We've been auditing various code lately to check for /tmp races and so
> on.  It seems that tempfile.mktemp() is used throughout the Python
> library.  While nice and portable, tempfile.mktemp() is vulnerable to
> races.
> 
> The TemporaryFile does a nice job of handling the filename returned by
> mktemp properly, but there are many modules that don't.
> 
> Should I attempt to patch them all to use TemporaryFile?  Or set up
> conditional use of mkstemp on those systems that support it?

Matt, please be sure to look at the 2.1 CVS tree.  I believe that
we've implemented some changes that may make mktemp() better behaved.

If you find that this is still not good enough, please feel free to
submit a patch to SourceForge that fixes the uses of mktemp() --
insofar as possible.  (I know e.g. the test suite has some places where
mktemp() is used as the name of a dbm file.)

Thanks for looking into this!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From esr at snark.thyrsus.com  Tue Mar 13 00:36:00 2001
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Mon, 12 Mar 2001 18:36:00 -0500
Subject: [Python-Dev] CML2 compiler slowness
Message-ID: <200103122336.f2CNa0W28998@snark.thyrsus.com>

(Copied to python-dev for informational purposes.)

I added some profiling apparatus to the CML2 compiler and investigated
mec's reports of a twenty-second startup.  I've just released the
version with profiling as 0.9.3, with fixes for all known bugs.

Nope, it's not the quadratic-time validation pass that's eating all
the cycles.  It's the expression parser I generated with John
Aycock's SPARK toolkit -- that's taking up an average of 26 seconds
out of an average 28-second runtime.

While I was at PC9 last week somebody mumbled something about Aycock's
code being cubic in time.  I should have heard ominous Jaws-style
theme music at that point, because that damn Earley-algorithm parser
has just swum up from the deeps and bitten me on the ass.

Looks like I'm going to have to hand-code an expression parser for
this puppy to speed it up at all.  *groan*  Anybody over on the Python
side know of a faster alternative LL or LR(1) parser generator or
factory class?
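The hand-coded replacement being contemplated is typically a small
recursive-descent (LL(1)) parser, one function per precedence level.
A toy sketch for arithmetic expressions (names invented; this is not
CML2 code):

```python
import re

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(text):
    # Yield (kind, value) pairs, ending with a sentinel.
    for num, op in TOKEN.findall(text):
        yield ("NUM", int(num)) if num else ("OP", op)
    yield ("END", None)

class Parser:
    def __init__(self, text):
        self.toks = tokenize(text)
        self.tok = next(self.toks)

    def eat(self, kind):
        k, v = self.tok
        assert k == kind, "unexpected token %r" % (self.tok,)
        self.tok = next(self.toks)
        return v

    def expr(self):                 # expr := term (('+'|'-') term)*
        val = self.term()
        while self.tok in (("OP", "+"), ("OP", "-")):
            op = self.eat("OP")
            val = val + self.term() if op == "+" else val - self.term()
        return val

    def term(self):                 # term := NUM ('*' NUM)*
        val = self.eat("NUM")
        while self.tok == ("OP", "*"):
            self.eat("OP")
            val *= self.eat("NUM")
        return val

assert Parser("2+3*4").expr() == 14     # precedence handled
assert Parser("10-2-3").expr() == 5     # left associativity
```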
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

It will be of little avail to the people, that the laws are made by
men of their own choice, if the laws be so voluminous that they cannot
be read, or so incoherent that they cannot be understood; if they be
repealed or revised before they are promulgated, or undergo such
incessant changes that no man, who knows what the law is to-day, can
guess what it will be to-morrow. Law is defined to be a rule of
action; but how can that be a rule, which is little known, and less
fixed?
	-- James Madison, Federalist Papers 62



From guido at digicool.com  Tue Mar 13 00:32:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:32:37 -0500
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: Your message of "Mon, 12 Mar 2001 22:04:31 +0100."
             <20010312220425.T404@xs4all.nl> 
References: <20010312220425.T404@xs4all.nl> 
Message-ID: <200103122332.SAA22948@cj20424-a.reston1.va.home.com>

> Contrary to Guido's keynote last week <wink> there are still two warts I
> know of in the current CPython. One is the fact that keywords cannot be used
> as identifiers anywhere, the other is the fact that 'continue' can still not
> be used inside a 'finally' clause. If I remember correctly, the latter isn't
> too hard to fix, it just needs a decision on what it should do :)
> 
> Currently, falling out of a 'finally' block will reraise the exception, if
> any. Using 'return' and 'break' will drop the exception and continue on as
> usual. However, that makes sense (imho) mostly because 'break' will continue
> past the try/finally block and 'return' will break out of the function
> altogether. Neither has a chance of re-entering the try/finally
> block. I'm not sure if that would make sense for 'continue' inside
> 'finally'.
> 
> On the other hand, I'm not sure if it makes sense for 'break' to continue
> but for 'continue' to break. :)

If you can fix it, the semantics you suggest are reasonable: continue
loses the exception and continues the loop.

> As for the other wart, I still want to fix it, but I'm not sure when I get
> the chance to grok the parser-generator enough to actually do it :) 

Yes, that was on the list once but got dropped.  You might want to get
together with Finn and Samuele to see what their rules are.  (They
allow the use of some keywords at least as keyword=expression
arguments and as object.attribute names.)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Tue Mar 13 00:41:01 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:41:01 -0500
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: Your message of "Mon, 12 Mar 2001 18:15:15 EST."
             <15021.22659.616556.298360@anthem.wooz.org> 
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain>  
            <15021.22659.616556.298360@anthem.wooz.org> 
Message-ID: <200103122341.SAA23054@cj20424-a.reston1.va.home.com>

> >>>>> "UO" == Uche Ogbuji <uche.ogbuji at fourthought.com> writes:
> 
>     UO> I know this isn't the types SIG and all, but since it has come
>     UO> up here, I'd like to (once again) express my violent
>     UO> disagreement with the efforts to add static typing to Python.
>     UO> After this, I won't pursue the thread further here.
> 
> Thank you Uche!  I couldn't agree more, and will also try to follow
> your example, at least until we see much more concrete proposals from
> the types-sig.  I just want to make a few comments for the record.

Barry, you were supposed to throw a brick at me with this content at
the meeting, on Eric's behalf.  Why didn't you?  I was waiting for
someone to explain why this was a big idea, but everybody kept their
face shut!  :-(

> First, it seemed to me that the greatest push for static type
> annotations at IPC9 was from the folks implementing Python on top of
> frameworks other than C.  I know from my own experiences that there is
> the allure of improved performance, e.g. JPython, given type hints
> available to the compiler.  While perhaps a laudable goal, this
> doesn't seem to be a stated top priority of Paul's.
> 
> Second, if type annotations are to be seriously considered for
> inclusion in Python, I think we as a community need considerable
> experience with a working implementation.  Yes, we need PEPs and specs
> and such, but we need something real and complete that we can play
> with, /without/ having to commit to its acceptance in mainstream
> Python.  Therefore, I think it'll be very important for type
> annotation proponents to figure out a way to allow people to see and
> play with an implementation in an experimental way.

+1

> This might mean an extensive set of patches, a la Stackless.  After
> seeing and talking to Neil and Andrew about PTL and Quixote, I think
> there might be another way.  It seems that their approach might serve
> as a framework for experimental Python syntaxes with minimal overhead.
> If I understand their work correctly, they have their own compiler
> which is built on Jeremy's tools, and which accepts a modified Python
> grammar, generating different but compatible bytecode sequences.
> E.g., their syntax has a "template" keyword approximately equivalent
> to "def" and they do something different with bare strings left on the
> stack.

I'm not sure this is viable.  I believe Jeremy's compiler package
actually doesn't have its own parser -- it uses the parser module
(which invokes Python's standard parser) and then transmogrifies the
parse tree into something more usable, but it doesn't change the
syntax!  Quixote can get away with this because their only change
is giving a different meaning to stand-alone string literals.  But for
type annotations this doesn't give enough freedom, I expect.

> The key trick is that it all hooks together with an import hook so
> normal Python code doesn't need to know anything about the mechanics
> of PTL compilation.  Given a homepage.ptl file, they just do an
> "import homepage" and this gets magically transformed into a .ptlc
> file and normal Python objects.

That would be nice, indeed.

> If I've got this correct, it seems like it would be a powerful tool
> for playing with alternative Python syntaxes.  Ideally, the same
> technique would allow the types-sig folks to create a working
> implementation that would require only the installation of an import
> hook.  This would let them build their systems with type annotations
> and prove their overwhelming benefit to the skeptical among us.

+1

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas at xs4all.net  Tue Mar 13 00:47:14 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 00:47:14 +0100
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 06:19:39PM -0500
References: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com> <200103122319.SAA22854@cj20424-a.reston1.va.home.com>
Message-ID: <20010313004714.U404@xs4all.nl>

On Mon, Mar 12, 2001 at 06:19:39PM -0500, Guido van Rossum wrote:

> One thing we *could* agree to [at lunch], after I pressed
> some people: 1/2 should return 0.5. Possibly 1/2 should not be a
> binary floating point number -- but then 0.5 shouldn't either, and
> whatever happens, these (1/2 and 0.5) should have the same type, be it
> rational, binary float, or decimal float.

Actually, I didn't quite agree, and still don't quite agree (I'm just not
happy with this 'automatic upgrading of types') but I did agree to differ
in opinion and bow to your wishes ;) I did agree that if 1/2 should not
return 0, it should return 0.5 (an object of the same type as
0.5-the-literal.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Tue Mar 13 00:48:00 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 12 Mar 2001 18:48:00 -0500
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: Your message of "Mon, 12 Mar 2001 18:41:01 EST."
             <200103122341.SAA23054@cj20424-a.reston1.va.home.com> 
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org>  
            <200103122341.SAA23054@cj20424-a.reston1.va.home.com> 
Message-ID: <200103122348.SAA23123@cj20424-a.reston1.va.home.com>

> Barry, you were supposed to throw a brick at me with this content at
> the meeting, on Eric's behalf.  Why didn't you?  I was waiting for
> someone to explain why this was a big idea, but everybody kept their
                                    ^^^^^^^^
> face shut!  :-(

/big idea/ -> /bad idea/ :-(

--Guido van Rossum (home page: http://www.python.org/~guido/)



From barry at digicool.com  Tue Mar 13 00:48:21 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 18:48:21 -0500
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl>
	<200103122332.SAA22948@cj20424-a.reston1.va.home.com>
Message-ID: <15021.24645.357064.856281@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    GvR> Yes, that was on the list once but got dropped.  You might
    GvR> want to get together with Finn and Samuele to see what their
    GvR> rules are.  (They allow the use of some keywords at least as
    GvR> keyword=expression arguments and as object.attribute names.)

I'm actually a little surprised that the "Jython vs. CPython"
differences page doesn't describe this (or am I missing it?):

    http://www.jython.org/docs/differences.html

I thought it used to.

IIRC, keywords were allowed if there was no question of it introducing
a statement.  So yes, keywords were allowed after the dot in attribute
lookups, and as keywords in argument lists, but not as variable names
on the lhs of an assignment (I don't remember if they were legal on
the rhs, but it seems like that ought to be okay, and is actually
necessary if you allow them in argument lists).

It would eliminate much of the need for writing obfuscated code like
"class_" or "klass".
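The workaround in question is the trailing underscore, a convention
PEP 8 later blessed for avoiding keyword clashes. A small example:

```python
# 'class' is a keyword, so 'def make_tag(name, class=None)' is a
# SyntaxError; the common dodge is a trailing underscore.
def make_tag(name, class_=None):
    attrs = ' class="%s"' % class_ if class_ else ""
    return "<%s%s>" % (name, attrs)

assert make_tag("div") == "<div>"
assert make_tag("div", class_="header") == '<div class="header">'
```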

-Barry



From barry at digicool.com  Tue Mar 13 00:52:57 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 18:52:57 -0500
Subject: [Python-Dev] Revive the types sig?
References: <jeremy@alum.mit.edu>
	<15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103120711.AAA09711@localhost.localdomain>
	<15021.22659.616556.298360@anthem.wooz.org>
	<200103122341.SAA23054@cj20424-a.reston1.va.home.com>
Message-ID: <15021.24921.998693.156809@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    GvR> Barry, you were supposed to throw a brick at me with this
    GvR> content at the meeting, on Eric's behalf.  Why didn't you?  I
    GvR> was waiting for someone to explain why this was a big idea,
    GvR> but everybody kept their face shut!  :-(

I actually thought I had, but maybe it was a brick made of bouncy spam
instead of concrete. :/

    GvR> I'm not sure this is viable.  I believe Jeremy's compiler
    GvR> package actually doesn't have its own parser -- it uses the
    GvR> parser module (which invokes Python's standard parser) and
    GvR> then transmogrifies the parse tree into something more
    GvR> usable, but it doesn't change the syntax!  Quixote can get
    GvR> away with this because their only change is giving a
    GvR> different meaning to stand-alone string literals.  But for
    GvR> type annotations this doesn't give enough freedom, I expect.

I thought PTL definitely included a "template" declaration keyword, a
la def, so they must have some solution here.  MEMS guys?

-Barry



From thomas at xs4all.net  Tue Mar 13 01:01:45 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 01:01:45 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <15021.24645.357064.856281@anthem.wooz.org>; from barry@digicool.com on Mon, Mar 12, 2001 at 06:48:21PM -0500
References: <20010312220425.T404@xs4all.nl> <200103122332.SAA22948@cj20424-a.reston1.va.home.com> <15021.24645.357064.856281@anthem.wooz.org>
Message-ID: <20010313010145.V404@xs4all.nl>

On Mon, Mar 12, 2001 at 06:48:21PM -0500, Barry A. Warsaw wrote:
> >>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

>     GvR> Yes, that was on the list once but got dropped.  You might
>     GvR> want to get together with Finn and Samuele to see what their
>     GvR> rules are.  (They allow the use of some keywords at least as
>     GvR> keyword=expression arguments and as object.attribute names.)

> I'm actually a little surprised that the "Jython vs. CPython"
> differences page doesn't describe this (or am I missing it?):

Nope, it's not in there. It should be under the Syntax heading.

>     http://www.jython.org/docs/differences.html

Funnily enough:

"Jython supports continue in a try clause. CPython should be fixed - but
don't hold your breath."

It should be updated for CPython 2.1 when it's released? :-)

[*snip* how Barry thinks he remembers how Jython might handle keywords]

> It would eliminate much of the need for writing obfuscated code like
> "class_" or "klass".

Yup. That's one of the reasons I brought it up. (That, and Mark mentioned
it's actually necessary for .NET Python to adhere to 'the spec'.)

Holding-my-breath-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nas at arctrix.com  Tue Mar 13 01:07:30 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 12 Mar 2001 16:07:30 -0800
Subject: [Python-Dev] parsers and import hooks [Was: Revive the types sig?]
In-Reply-To: <200103122341.SAA23054@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 06:41:01PM -0500
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org> <200103122341.SAA23054@cj20424-a.reston1.va.home.com>
Message-ID: <20010312160729.A2976@glacier.fnational.com>

[Recipient addresses brutally slashed.]

On Mon, Mar 12, 2001 at 06:41:01PM -0500, Guido van Rossum wrote:
> I'm not sure this is viable.  I believe Jeremy's compiler package
> actually doesn't have its own parser -- it uses the parser module
> (which invokes Python's standard parser) and then transmogrifies the
> parse tree into something more usable, but it doesn't change the
> syntax!

Yup.  Having a more flexible Python-like parser would be cool but
I don't think I'd ever try to implement it.  I know Christian
Tismer wants one.  Maybe he will volunteer. :-)

[On using import hooks to load modules with modified syntax/semantics]
> That would be nice, indeed.

It's nice if you can get it to work.  Import hooks are a bitch to
write and are slow.  Also, you get tracebacks from hell.  It
would be nice if there were higher-level hooks in the
interpreter.  imputil.py did not do the trick for me after
wrestling with it for hours.

  Neil



From nkauer at users.sourceforge.net  Tue Mar 13 01:09:10 2001
From: nkauer at users.sourceforge.net (Nikolas Kauer)
Date: Mon, 12 Mar 2001 18:09:10 -0600 (CST)
Subject: [Python-Dev] syntax exploration tool
In-Reply-To: <15021.22659.616556.298360@anthem.wooz.org>
Message-ID: <Pine.LNX.4.10.10103121801530.7351-100000@falcon.physics.wisc.edu>

I'd volunteer to put in time and help create such a tool.  If someone 
sufficiently knowledgeable decides to go ahead with such a project 
please let me know.

---
Nikolas Kauer <nkauer at users.sourceforge.net>

> Second, if type annotations are to be seriously considered for
> inclusion in Python, I think we as a community need considerable
> experience with a working implementation.  Yes, we need PEPs and specs
> and such, but we need something real and complete that we can play
> with, /without/ having to commit to its acceptance in mainstream
> Python.  Therefore, I think it'll be very important for type
> annotation proponents to figure out a way to allow people to see and
> play with an implementation in an experimental way.




From nas at arctrix.com  Tue Mar 13 01:13:04 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 12 Mar 2001 16:13:04 -0800
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <15021.24921.998693.156809@anthem.wooz.org>; from barry@digicool.com on Mon, Mar 12, 2001 at 06:52:57PM -0500
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org> <200103122341.SAA23054@cj20424-a.reston1.va.home.com> <15021.24921.998693.156809@anthem.wooz.org>
Message-ID: <20010312161304.B2976@glacier.fnational.com>

On Mon, Mar 12, 2001 at 06:52:57PM -0500, Barry A. Warsaw wrote:
> I thought PTL definitely included a "template" declaration keyword, a
> la, def, so they must have some solution here.  MEMs guys?

The correct term is "hack".  We do a re.sub on the text of the
module.  I considered building a new parsermodule with def
changed to template but haven't had time yet.  I think the
dominant cost when importing a PTL module is due to stat() calls
driven by hairy Python code.
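The "hack" reduces to a few lines (a toy reconstruction, not the
actual PTL code):

```python
import re

# Rewrite 'template' at the start of a (possibly indented) line to
# 'def' before handing the source to the ordinary Python compiler.
ptl_source = "template greeting(name):\n    return 'Hi ' + name\n"
py_source = re.sub(r"(?m)^(\s*)template\b", r"\1def", ptl_source)

namespace = {}
exec(compile(py_source, "<ptl>", "exec"), namespace)
assert namespace["greeting"]("Neil") == "Hi Neil"
```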

  Neil



From jeremy at alum.mit.edu  Tue Mar 13 01:14:47 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 12 Mar 2001 19:14:47 -0500 (EST)
Subject: [Python-Dev] comments on PEP 219
Message-ID: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>

Here are some comments on Gordon's new draft of PEP 219 and the
stackless dev day discussion at Spam 9.

I left the dev day discussion with the following takehome message:
There is a tension between Stackless Python on the one hand and making
Python easy to embed in and extend with C programs on the other hand.
The PEP describes this as the major difficulty with C Python.  I won't
repeat the discussion of the problem there.

I would like to see a somewhat more detailed discussion of this in
the PEP.  I think it's an important issue to work out before making a
decision about a stack-light patch.

The problem of nested interpreters and the C API seems to come up in
several ways.  These are all touched on in the PEP, but not in much
detail.  This message is mostly a request for more detail :-).

  - Stackless disallows transfer out of a nested interpreter.  (It
    has to; anything else would be insane.)  Therefore, the
    specification for microthreads &c. will be complicated by a
    listing of the places where control transfers are not possible.
    The PEP says this is not ideal, but not crippling.  I'd like to
    see an actual spec for where it's not allowed in pure Python.  It
    may not be crippling, but it may be a tremendous nuisance in
    practice; e.g. remember that __init__ calls create a critical
    section.

  - If an application makes use of C extensions that do create nested
    interpreters, it will be even harder to figure out when
    Python code is executing in a nested interpreter.  For a large
    system with several C extensions, this could be complicated.  I
    presume, therefore, that there will be a C API for playing nice
    with stackless.  I'd like to see a PEP that discusses what this C
    API would look like.

  - Would all of the internal Python calls that create nested
    interpreters be replaced?  I'm thinking of things like
    PySequence_Fast() and the ternary_op() call in abstract.c.  How
    hard will it be to convert all these functions to be stackless?
    How many functions are affected?  And how many places are they
    called from?

  - What is the performance impact of adding the stackless patches?  I
    think Christian mentioned a 10% slowdown at dev day, which doesn't
    sound unreasonable.  Will reworking the entire interpreter to be
    stackless make that slowdown larger or smaller?

One other set of issues, somewhat out of bounds for this
particular PEP, is which control features we want that can only be
implemented with stackless.  Can we implement generators or coroutines
efficiently without a stackless approach?

Jeremy



From aycock at csc.UVic.CA  Tue Mar 13 01:13:01 2001
From: aycock at csc.UVic.CA (John Aycock)
Date: Mon, 12 Mar 2001 16:13:01 -0800
Subject: [Python-Dev] Re: CML2 compiler slowness
Message-ID: <200103130013.QAA13925@valdes.csc.UVic.CA>

| From esr at snark.thyrsus.com Mon Mar 12 15:14:33 2001
| It's the expression parser I generated with John
| Aycock's SPARK toolkit -- that's taking up an average of 26 seconds
| out of an average 28-second runtime.
|
| While I was at PC9 last week somebody mumbled something about Aycock's
| code being cubic in time.  I should have heard ominous Jaws-style
| theme music at that point, because that damn Earley-algorithm parser
| has just swum up from the deeps and bitten me on the ass.

Eric:

You were partially correctly informed.  The time complexity of Earley's
algorithm is O(n^3) in the worst case, that being the meanest, nastiest,
most ambiguous context-free grammar you could possibly think of.  Unless
you're parsing natural language, this won't happen.  For any unambiguous
grammar, the worst case drops to O(n^2), and for a set of grammars which
loosely coincides with the LR(k) grammars, the complexity drops to O(n).

In other words, it's linear for most programming language grammars.  Now
the overhead for a general parsing algorithm like Earley's is of course
greater than that of a much more specialized algorithm, like LALR(1).

The next version of SPARK uses some of my research work into Earley's
algorithm and improves the speed quite dramatically.  It's not all
ready to go yet, but I can send you my working version which will give
you some idea of how fast it'll be for CML2.  Also, I assume you're
supplying a typestring() method to the parser class?  That speeds things
up as well.

John



From jepler at inetnebr.com  Tue Mar 13 00:38:42 2001
From: jepler at inetnebr.com (Jeff Epler)
Date: Mon, 12 Mar 2001 17:38:42 -0600
Subject: [Python-Dev] Revive the types sig?
In-Reply-To: <15021.22659.616556.298360@anthem.wooz.org>
References: <jeremy@alum.mit.edu> <15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net> <200103120711.AAA09711@localhost.localdomain> <15021.22659.616556.298360@anthem.wooz.org>
Message-ID: <20010312173842.A3962@potty.housenet>

On Mon, Mar 12, 2001 at 06:15:15PM -0500, Barry A. Warsaw wrote:
> This might mean an extensive set of patches, a la Stackless.  After
> seeing and talking to Neil and Andrew about PTL and Quixote, I think
> there might be another way.  It seems that their approach might serve
> as a framework for experimental Python syntaxes with minimal overhead.
> If I understand their work correctly, they have their own compiler
> which is built on Jeremy's tools, and which accepts a modified Python
> grammar, generating different but compatible bytecode sequences.
> E.g., their syntax has a "template" keyword approximately equivalent
> to "def" and they do something different with bare strings left on the
> stack.

See also my project, "Möbius Python".[1]

I've used a lot of existing pieces, including the SPARK toolkit,
Tools/compiler, and Lib/tokenize.py.

The end result is a set of Python classes and functions that implement the
whole tokenize/parse/build AST/bytecompile process.  To the extent that
each component is modifiable or subclassable, Python's grammar and semantics
can be extended.  For example, new keywords and statement types can be
introduced (such as Quixote's 'tmpl'), new operators can be introduced
(such as |absolute value|), along with the associated semantics.
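For reference, the stdlib pieces of that pipeline can be exercised directly.  This is a minimal modern-Python sketch of the tokenize → compile → execute path, not Möbius Python itself:

```python
import io
import tokenize

# Tokenize a tiny program with the stdlib tokenizer...
src = "x = 1 + 2\n"
for tok in tokenize.generate_tokens(io.StringIO(src).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))

# ...then compile and run it, the last two stages of the pipeline.
code = compile(src, "<demo>", "exec")
ns = {}
exec(code, ns)
print(ns["x"])  # 3
```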

(At this time, there is only limited potential to modify the tokenizer.)

One big problem right now is that Möbius Python only implements the
1.5.2 language subset.

The CVS tree on SourceForge is not up to date, but the tree on my system is
pretty complete, lacking only documentation.  Unfortunately, even a small
modification requires a fair amount of code (my 'absolute value' extension
is 91 lines, plus comments, empty lines, and imports).

As far as I know, all that Quixote does at the syntax level is a few
regular expression tricks.  Möbius Python is much more than this.

Jeff
[1] http://mobiuspython.sourceforge.net/



From tim.one at home.com  Tue Mar 13 02:14:34 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 12 Mar 2001 20:14:34 -0500
Subject: [Python-Dev] Re: CML2 compiler slowness
Message-ID: <LNBBLJKPBEHFEDALKOLCIEDLJFAA.tim.one@home.com>

FYI, Fredrik's regexp engine also supports two undocumented match-object
attributes that could be used to speed SPARK lexing, and especially when
there are many token types (gives a direct index to the matching alternative
instead of making you do a linear search for it -- that can add up to a major
win).  Simple example below.

Python-Dev, this has been in there since 2.0 (1.6?  unsure).  I've been using
it happily all along.  If Fredrik is agreeable, I'd like to see this
documented for 2.1, i.e. made an officially supported part of Python's regexp
facilities.

-----Original Message-----
From: Tim Peters [mailto:tim.one at home.com]
Sent: Monday, March 12, 2001 6:37 PM
To: python-list at python.org
Subject: RE: Help with Regular Expressions

[Raymond Hettinger]
> Is there an idiom for how to use regular expressions for lexing?
>
> My attempt below is unsatisfactory because it has to filter the
> entire match group dictionary to find-out which token caused
> the match. This approach isn't scalable because every token
> match will require a loop over all possible token types.
>
> I've fiddled with this one for hours and can't seem to find a
> direct way get a group dictionary that contains only matches.

That's because there isn't a direct way; the best you can do now is to order
your alternatives most-likely-first (which is a good idea anyway, given the
way the engine works).

If you peek inside sre.py (2.0 or later), you'll find an undocumented class
Scanner that uses the undocumented .lastindex attribute of match objects.
Someday I hope this will be the basis for solving exactly the problem you're
facing.  There's also an undocumented .lastgroup attribute:

Python 2.1b1 (#11, Mar  2 2001, 11:23:29) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
IDLE 0.6 -- press F1 for help
>>> import re
>>> pat = re.compile(r"(?P<a>aa)|(?P<b>bb)")
>>> m = pat.search("baab")
>>> m.lastindex  # numeral of group that matched
1
>>> m.lastgroup  # name of group that matched
'a'
>>> m = pat.search("ababba")
>>> m.lastindex
2
>>> m.lastgroup
'b'
>>>

They're not documented yet because we're not yet sure whether we want to make
them permanent parts of the language.  So feel free to play, but don't count
on them staying around forever.  If you like them, drop a note to the effbot
saying so.

for-more-docs-read-the-source-code-ly y'rs  - tim
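The direct-index idea above can be sketched as a tiny lexer; this is a hedged illustration (not SPARK, and the token names are invented) of using .lastgroup to identify the matching alternative without scanning the whole group dictionary:

```python
import re

# One alternation, one named group per token type.  After a match,
# m.lastgroup names the alternative that fired -- a direct lookup
# instead of a linear search over groupdict().
token_pat = re.compile(
    r"(?P<NUM>\d+)|(?P<NAME>[A-Za-z_]\w*)|(?P<OP>[-+*/])|(?P<WS>\s+)")

def tokenize(text):
    tokens, pos = [], 0
    while pos < len(text):
        m = token_pat.match(text, pos)
        if m is None:
            raise ValueError("bad character at %d" % pos)
        if m.lastgroup != "WS":       # skip whitespace tokens
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

print(tokenize("x + 42"))  # [('NAME', 'x'), ('OP', '+'), ('NUM', '42')]
```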




From paulp at ActiveState.com  Tue Mar 13 02:45:51 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 17:45:51 -0800
Subject: [Python-Dev] FOLLOWUPS!!!!!!!
References: <Pine.LNX.4.10.10103121801530.7351-100000@falcon.physics.wisc.edu>
Message-ID: <3AAD7BCF.4D4F69B7@ActiveState.com>

Please keep follow-ups to just types-sig.  I'm very sorry I cross-posted
in the beginning, and I apologize to everyone on multiple lists.  I did
direct people to follow up only to types-sig, but I should have used a
header... or separate posts!

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From ping at lfw.org  Tue Mar 13 02:56:27 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Mon, 12 Mar 2001 17:56:27 -0800 (PST)
Subject: [Python-Dev] parsers and import hooks
In-Reply-To: <20010312160729.A2976@glacier.fnational.com>
Message-ID: <Pine.LNX.4.10.10103121755110.13108-100000@skuld.kingmanhall.org>

On Mon, 12 Mar 2001, Neil Schemenauer wrote:
> 
> It's nice if you can get it to work.  Import hooks are a bitch to
> write, and they're slow.  Also, you get tracebacks from hell.  It
> would be nice if there were higher-level hooks in the
> interpreter.

Let me chime in with a request, please, for a higher-level find_module()
that understands packages -- or is there already some way to emulate the 
file-finding behaviour of "import x.y.z" that i don't know about?
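For the record, later Pythons grew exactly this: importlib (which postdates this thread) resolves dotted package names directly.  A minimal sketch:

```python
import importlib.util

# find_spec understands packages, so "x.y.z" lookups need no manual
# path walking; email.mime.text is just a handy stdlib example.
spec = importlib.util.find_spec("email.mime.text")
print(spec.name, "->", spec.origin)
```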



-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso




From tim.one at home.com  Tue Mar 13 03:07:46 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 12 Mar 2001 21:07:46 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
In-Reply-To: <20010312164705.C641@devserv.devel.redhat.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEDMJFAA.tim.one@home.com>

[Matt Wilson]
> We've been auditing various code lately to check for /tmp races and so
> on.  It seems that tempfile.mktemp() is used throughout the Python
> library.  While nice and portable, tempfile.mktemp() is vulnerable to
> races.
> ...

Adding to what Guido said, the 2.1 mktemp() finally bites the bullet and uses
a mutex to ensure that no two threads (within a process) can ever generate
the same filename.  The 2.0 mktemp() was indeed subject to races in this
respect.  Freedom from cross-process races relies on using the pid in the
filename too.




From paulp at ActiveState.com  Tue Mar 13 03:18:13 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 18:18:13 -0800
Subject: [Python-Dev] CML2 compiler slowness
References: <200103122336.f2CNa0W28998@snark.thyrsus.com>
Message-ID: <3AAD8365.285CCCFE@ActiveState.com>

"Eric S. Raymond" wrote:
> 
> ...
> 
> Looks like I'm going to have to hand-code an expression parser for
> this puppy to speed it up at all.  *groan*  Anybody over on the Python
> side know of a faster alternative LL or LR(1) parser generator or
> factory class?

I tried to warn you about those Earley parsers. :)

  http://mail.python.org/pipermail/python-dev/2000-July/005321.html


Here are some pointers to other solutions:

Martel: http://www.biopython.org/~dalke/Martel

flex/bison: http://www.cs.utexas.edu/users/mcguire/software/fbmodule/

kwparsing: http://www.chordate.com/kwParsing/

mxTextTools: http://www.lemburg.com/files/python/mxTextTools.html

metalang: http://www.tibsnjoan.demon.co.uk/mxtext/Metalang.html

plex: http://www.cosc.canterbury.ac.nz/~greg/python/Plex/

pylr: http://starship.python.net/crew/scott/PyLR.html

SimpleParse: (offline?)

mcf tools: (offline?)

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From thomas at xs4all.net  Tue Mar 13 03:23:02 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 03:23:02 +0100
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Include frameobject.h,2.30,2.31
In-Reply-To: <E14ce4t-0007wS-00@usw-pr-cvs1.sourceforge.net>; from jhylton@usw-pr-web.sourceforge.net on Mon, Mar 12, 2001 at 05:58:23PM -0800
References: <E14ce4t-0007wS-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <20010313032302.W404@xs4all.nl>

On Mon, Mar 12, 2001 at 05:58:23PM -0800, Jeremy Hylton wrote:
> Modified Files:
> 	frameobject.h 
> Log Message:

> There is also a C API change: PyFrame_New() is reverting to its
> pre-2.1 signature.  The change introduced by nested scopes was a
> mistake.  XXX Is this okay between beta releases?

It is definitely fine by me ;-) And Guido's reason for not caring about
breaking it ("no one uses it") applies equally well to unbreaking it between
beta releases.

Backward-bigot-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From paulp at ActiveState.com  Tue Mar 13 04:01:14 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 19:01:14 -0800
Subject: [Python-Dev] Re: CML2 compiler slowness
References: <200103130013.QAA13925@valdes.csc.UVic.CA>
Message-ID: <3AAD8D7A.3634BC56@ActiveState.com>

John Aycock wrote:
> 
> ...
> 
> For any unambiguous
> grammar, the worst case drops to O(n^2), and for a set of grammars 
> which loosely coincides with the LR(k) grammars, the complexity drops 
> to O(n).

I'd say: "it's linear for optimal grammars for most programming
languages."  But it doesn't warn you when you are writing a "bad grammar"
(not LR(k)), so things just slow down as you add rules...

Is there a tutorial about how to make fast Spark grammars or should I go
back and re-read my compiler construction books?

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From barry at digicool.com  Tue Mar 13 03:56:42 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Mon, 12 Mar 2001 21:56:42 -0500
Subject: [Python-Dev] Revive the types sig?
References: <jeremy@alum.mit.edu>
	<15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103120711.AAA09711@localhost.localdomain>
	<15021.22659.616556.298360@anthem.wooz.org>
	<200103122341.SAA23054@cj20424-a.reston1.va.home.com>
	<15021.24921.998693.156809@anthem.wooz.org>
	<20010312161304.B2976@glacier.fnational.com>
Message-ID: <15021.35946.606279.267593@anthem.wooz.org>

>>>>> "NS" == Neil Schemenauer <nas at arctrix.com> writes:

    >> I thought PTL definitely included a "template" declaration
    >> keyword, a la, def, so they must have some solution here.  MEMs
    >> guys?

    NS> The correct term is "hack".  We do a re.sub on the text of the
    NS> module.  I considered building a new parsermodule with def
    NS> changed to template but haven't had time yet.  I think the
    NS> dominant cost when importing a PTL module is due to stat() calls
    NS> driven by hairy Python code.

Ah, good to know, thanks.  I definitely think it would be A Cool Thing
if one could build a complete Python parser and compiler in Python.
Kind of along the lines of building the interpreter main loop in
Python as much as possible.  I know that /I'm/ not going to have any
time to contribute though (and others have more and better experience
in this area than I do).

-Barry



From paulp at ActiveState.com  Tue Mar 13 04:09:21 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 12 Mar 2001 19:09:21 -0800
Subject: [Python-Dev] Revive the types sig?
References: <jeremy@alum.mit.edu>
		<15020.9404.557943.164934@w221.z064000254.bwi-md.dsl.cnc.net>
		<200103120711.AAA09711@localhost.localdomain>
		<15021.22659.616556.298360@anthem.wooz.org>
		<200103122341.SAA23054@cj20424-a.reston1.va.home.com>
		<15021.24921.998693.156809@anthem.wooz.org>
		<20010312161304.B2976@glacier.fnational.com> <15021.35946.606279.267593@anthem.wooz.org>
Message-ID: <3AAD8F61.C61CAC85@ActiveState.com>

"Barry A. Warsaw" wrote:
> 
>...
> 
> Ah, good to know, thanks.  I definitely think it would be A Cool Thing
> if one could build a complete Python parser and compiler in Python.
> Kind of along the lines of building the interpreter main loop in
> Python as much as possible.  I know that /I'm/ not going to have any
> time to contribute though (and others have more and better experience
> in this area than I do).

I'm surprised that there are dozens of compiler compilers written in
Python but few people stepped forward to say that theirs supports Python
itself. mxTextTools has a Python parser...does anyone know how good it
is?

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From esr at thyrsus.com  Tue Mar 13 04:11:02 2001
From: esr at thyrsus.com (Eric S. Raymond)
Date: Mon, 12 Mar 2001 22:11:02 -0500
Subject: [Python-Dev] Re: [kbuild-devel] Re: CML2 compiler slowness
In-Reply-To: <200103130013.QAA13925@valdes.csc.UVic.CA>; from aycock@csc.UVic.CA on Mon, Mar 12, 2001 at 04:13:01PM -0800
References: <200103130013.QAA13925@valdes.csc.UVic.CA>
Message-ID: <20010312221102.A31473@thyrsus.com>

John Aycock <aycock at csc.UVic.CA>:
> The next version of SPARK uses some of my research work into Earley's
> algorithm and improves the speed quite dramatically.  It's not all
> ready to go yet, but I can send you my working version which will give
> you some idea of how fast it'll be for CML2.

I'd like to see it.

>                                             Also, I assume you're
> supplying a typestring() method to the parser class?  That speeds things
> up as well.

I supplied one.  The expression parser promptly dropped from 92% of
the total compiler run time to 87%, a whole 5% of improvement.

To paraphrase a famous line from E.E. "Doc" Smith, "I could eat a handful
of chad and *puke* a faster parser than that..."
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

[W]hat country can preserve its liberties, if its rulers are not
warned from time to time that [the] people preserve the spirit of
resistance?  Let them take arms...The tree of liberty must be
refreshed from time to time, with the blood of patriots and tyrants.
	-- Thomas Jefferson, letter to Col. William S. Smith, 1787 



From msw at redhat.com  Tue Mar 13 04:08:42 2001
From: msw at redhat.com (Matt Wilson)
Date: Mon, 12 Mar 2001 22:08:42 -0500
Subject: [Python-Dev] Concerns about tempfile.mktemp()
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEDMJFAA.tim.one@home.com>; from tim.one@home.com on Mon, Mar 12, 2001 at 09:07:46PM -0500
References: <20010312164705.C641@devserv.devel.redhat.com> <LNBBLJKPBEHFEDALKOLCMEDMJFAA.tim.one@home.com>
Message-ID: <20010312220842.A14634@devserv.devel.redhat.com>

Right, but this isn't the problem that I'm describing.  Because mktemp
just returns a "checked" filename, it is vulnerable to symlink attacks.
Python programs run as root have a small window of opportunity between
when mktemp checks for the existence of the temp file and when the
function calling mktemp actually uses it.

So, it's hostile out-of-process attacks I'm worrying about, and the
recent CVS changes don't address that.

Cheers,

Matt

On Mon, Mar 12, 2001 at 09:07:46PM -0500, Tim Peters wrote:
> 
> Adding to what Guido said, the 2.1 mktemp() finally bites the bullet and uses
> a mutex to ensure that no two threads (within a process) can ever generate
> the same filename.  The 2.0 mktemp() was indeed subject to races in this
> respect.  Freedom from cross-process races relies on using the pid in the
> filename too.
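The standard fix for this class of race is to let the kernel create the file atomically with O_EXCL, so a pre-planted symlink makes the open fail instead of being silently followed.  A hedged sketch (the filename scheme here is illustrative only):

```python
import os
import tempfile

# O_CREAT | O_EXCL fails with EEXIST if anything -- including a
# symlink -- already sits at the path, closing the check/use window.
path = os.path.join(tempfile.gettempdir(), "demo-%d.tmp" % os.getpid())
fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
try:
    os.write(fd, b"scratch data")
finally:
    os.close(fd)
    os.remove(path)
```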



From tim.one at home.com  Tue Mar 13 04:40:28 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 12 Mar 2001 22:40:28 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com>

[Guido, to David Ascher]
> ...
> One thing we *could* agree to there, after I pressed some people: 1/2
> should return 0.5.

FWIW, in a show of hands at the devday session after you left, an obvious
majority said they did object to the fact that 1/2 is 0 today.  This was bold in the
face of Paul Dubois's decibel-rich opposition <wink>.  There was no consensus
on what it *should* do instead, though.

> Possibly 1/2 should not be a binary floating point number -- but then
> 0.5 shouldn't either, and whatever happens, these (1/2 and 0.5) should
> have the same type, be it rational, binary float, or decimal float.

I don't know that imposing this formal simplicity is going to be a genuine
help, because the area it's addressing is inherently complex.  In such cases,
simplicity is bought at the cost of trying to wish away messy realities.
You're aiming for Python arithmetic that's about 5x simpler than Python
strings <0.7 wink>.

It rules out rationals because you already know how insisting on this rule
worked out in ABC (it didn't).

It rules out decimal floats because scientific users can't tolerate the
inefficiency of simulating arithmetic in software (software fp is at best
~10x slower than native fp, assuming expertly hand-optimized assembler
exploiting platform HW tricks), and aren't going to agree to stick physical
constants in strings to pass to some "BinaryFloat()" constructor.

That only leaves native HW floating-point, but you already know *that*
doesn't work for newbies either.

Presumably ABC used rationals because usability studies showed they worked
best (or didn't they test this?).  Presumably the TeachScheme! dialect of
Scheme uses rationals for the same reason.  Curiously, the latter behaves
differently depending on "language level":

> (define x (/ 2 3))
> x
2/3
> (+ x 0.5)
1.1666666666666665
>

That's what you get under the "Full Scheme" setting.  Under all other
settings (Beginning, Intermediate, and Advanced Student), you get this
instead:

> (define x (/ 2 3))
> x
2/3
> (+ x 0.5)
7/6
>

In those you have to tag 0.5 as being inexact in order to avoid having it
treated as ABC did (i.e., as an exact decimal rational):

> (+ x #i0.5)
#i1.1666666666666665
>

> (- (* .58 100) 58)   ; showing that .58 is treated as exact
0
> (- (* #i.58 100) 58) ; same IEEE result as Python when .58 tagged w/ #i
#i-7.105427357601002e-015
>

So that's their conclusion:  exact rationals are best for students at all
levels (apparently the same conclusion reached by ABC), but when you get to
the real world rationals are no longer a suitable meaning for fp literals
(apparently the same conclusion *I* reached from using ABC; 1/10 and 0.1 are
indeed very different beasts to me).

A hard question:  what if they're right?  That is, that you have to favor one
of newbies or experienced users at the cost of genuine harm to the other?




From aycock at csc.UVic.CA  Tue Mar 13 04:32:54 2001
From: aycock at csc.UVic.CA (John Aycock)
Date: Mon, 12 Mar 2001 19:32:54 -0800
Subject: [Python-Dev] Re: [kbuild-devel] Re: CML2 compiler slowness
Message-ID: <200103130332.TAA17222@valdes.csc.UVic.CA>

Eric the Poet <esr at thyrsus.com> writes:
| To paraphrase a famous line from E.E. "Doc" Smith, "I could eat a handful
| of chad and *puke* a faster parser than that..."

Indeed.  Very colorful.

I'm sending you the in-development version of SPARK in a separate
message.

John



From martin at loewis.home.cs.tu-berlin.de  Tue Mar 13 07:06:13 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 13 Mar 2001 07:06:13 +0100
Subject: [Python-Dev] more Solaris extension grief
Message-ID: <200103130606.f2D66D803507@mira.informatik.hu-berlin.de>

gcc -shared  ./PyEnforcer.o  -L/home/gvwilson/cozumel/merlot/enforcer
-lenforcer -lopenssl -lstdc++  -o ./PyEnforcer.so

> Text relocation remains                         referenced
>    against symbol                  offset      in file
> istream type_info function          0x1c
> /usr/local/lib/gcc-lib/sparc-sun-solaris2.8/2.95.2/libstdc++.a(strstream.o)
> istream type_info function          0x18

> Has anyone seen this problem before?

Yes, there have been a number of SF bug reports on that, and proposals
to fix that. It's partly a policy issue, but I believe all these
patches have been wrong, as the problem is not in Python.

When you build a shared library, it ought to be
position-independent. If it is not, the linker will need to put
relocation instructions into the text segment, which means that the
text segment has to be writable. In turn, the text of the shared
library will not be demand-paged anymore, but copied into main memory
when the shared library is loaded. Therefore, gcc asks ld to issue an
error if non-PIC code is integrated into a shared object.

To have the compiler emit position-independent code, you need to pass
the -fPIC option when producing object files. You not only need to do
that for your own object files, but for the object files of all the
static libraries you are linking with. In your case, the static
library is libstdc++.a.

Please note that linking libstdc++.a statically not only means that
you lose position-independence; it also means that you end up with a
copy of libstdc++.a in each extension module that you link with it.
In turn, global objects defined in the library may be constructed
twice (I believe).

There are a number of solutions:

a) Build libstdc++ as a  shared library. This is done on Linux, so
   you don't get the error on Linux.

b) Build libstdc++.a using -fPIC. The gcc build process does not
   support such a configuration, so you'd need to arrange that
   yourself.

c) Pass the -mimpure-text option to gcc when linking. That will make
   the text segment writable, and silence the linker.

There was one proposal that looks like it would work, but doesn't:

d) Instead of linking with -shared, link with -G. That forgets to link
   the shared library startup files (crtbeginS/crtendS) into the shared
   library, which in turn means that constructors of global objects will
   fail to work; it also does a number of other things incorrectly.

Regards,
Martin



From martin at loewis.home.cs.tu-berlin.de  Tue Mar 13 07:12:41 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 13 Mar 2001 07:12:41 +0100
Subject: [Python-Dev] CML2 compiler slowness
Message-ID: <200103130612.f2D6Cfa03574@mira.informatik.hu-berlin.de>

> Anybody over on the Python side know of a faster alternative LL or
> LR(1) parser generator or factory class?

I'm using Yapps (http://theory.stanford.edu/~amitp/Yapps/), and find
it quite convenient, and also sufficiently fast (it gives, together
with sre, a factor of two or three over a flex/bison solution of XPath
parsing). I've been using my own lexer (using sre), both to improve
speed and to deal with the subtleties of XPath tokenization.  If
you can send me the grammar and some sample sentences, I can help
writing a Yapps parser (as I think Yapps is an under-used kit).

Again, this question is probably better asked on python-list than
python-dev...

Regards,
Martin



From trentm at ActiveState.com  Tue Mar 13 07:56:12 2001
From: trentm at ActiveState.com (Trent Mick)
Date: Mon, 12 Mar 2001 22:56:12 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103122319.SAA22854@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Mon, Mar 12, 2001 at 06:19:39PM -0500
References: <PLEJJNOHDIGGLDPOGPJJIEPLCNAA.DavidA@ActiveState.com> <200103122319.SAA22854@cj20424-a.reston1.va.home.com>
Message-ID: <20010312225612.H8460@ActiveState.com>

I just want to add that one of the main participants in the Numeric Coercion
session was Paul Dubois, and I am not sure that he is on python-dev.  He should
probably be in this discussion.

Trent

-- 
Trent Mick
TrentM at ActiveState.com



From guido at digicool.com  Tue Mar 13 10:58:32 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 04:58:32 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Include frameobject.h,2.30,2.31
In-Reply-To: Your message of "Tue, 13 Mar 2001 03:23:02 +0100."
             <20010313032302.W404@xs4all.nl> 
References: <E14ce4t-0007wS-00@usw-pr-cvs1.sourceforge.net>  
            <20010313032302.W404@xs4all.nl> 
Message-ID: <200103130958.EAA29951@cj20424-a.reston1.va.home.com>

> On Mon, Mar 12, 2001 at 05:58:23PM -0800, Jeremy Hylton wrote:
> > Modified Files:
> > 	frameobject.h 
> > Log Message:
> 
> > There is also a C API change: PyFrame_New() is reverting to its
> > pre-2.1 signature.  The change introduced by nested scopes was a
> > mistake.  XXX Is this okay between beta releases?
> 
> It is definitely fine by me ;-) And Guido's reason for not caring about
> breaking it ("no one uses it") applies equally well to unbreaking it between
> beta releases.

This is a good thing!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Tue Mar 13 11:18:35 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 05:18:35 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: Your message of "Mon, 12 Mar 2001 22:40:28 EST."
             <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> 
Message-ID: <200103131018.FAA30047@cj20424-a.reston1.va.home.com>

> [Guido, to David Ascher]
> > ...
> > One thing we *could* agree to there, after I pressed some people: 1/2
> > should return 0.5.
> 
> FWIW, in a show of hands at the devday session after you left, an obvious
> majority said they did object to the fact that 1/2 is 0 today.  This was bold in the
> face of Paul Dubois's decibel-rich opposition <wink>.  There was no consensus
> on what it *should* do instead, though.
> 
> > Possibly 1/2 should not be a binary floating point number -- but then
> > 0.5 shouldn't either, and whatever happens, these (1/2 and 0.5) should
> > have the same type, be it rational, binary float, or decimal float.
> 
> I don't know that imposing this formal simplicity is going to be a genuine
> help, because the area it's addressing is inherently complex.  In such cases,
> simplicity is bought at the cost of trying to wish away messy realities.
> You're aiming for Python arithmetic that's about 5x simpler than Python
> strings <0.7 wink>.
> 
> It rules out rationals because you already know how insisting on this rule
> worked out in ABC (it didn't).
> 
> It rules out decimal floats because scientific users can't tolerate the
> inefficiency of simulating arithmetic in software (software fp is at best
> ~10x slower than native fp, assuming expertly hand-optimized assembler
> exploiting platform HW tricks), and aren't going to agree to stick physical
> constants in strings to pass to some "BinaryFloat()" constructor.
> 
> That only leaves native HW floating-point, but you already know *that*
> doesn't work for newbies either.

I'd like to argue about that.  I think the extent to which HWFP
doesn't work for newbies is mostly related to the change we made in
2.0 where repr() (and hence the interactive prompt) show full
precision, leading to annoyances like repr(1.1) == '1.1000000000000001'.

I've noticed that the number of complaints I see about this went way
up after 2.0 was released.

I expect that most newbies don't use floating point in a fancy way,
and would never notice it if it was slightly off as long as the output
was rounded like it was before 2.0.
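The display change being described is easy to demonstrate; a small illustration in modern Python syntax (the exact digit strings are an assumption about today's interpreters, not the 2.0 one under discussion):

```python
# Since Python 2.0, repr() shows enough digits to round-trip the float
# exactly, while rounded formatting hides the binary representation error.
x = 0.1 + 0.2
print(repr(x))      # full precision: 0.30000000000000004
print(f"{x:.1f}")   # rounded display, as before 2.0: 0.3
```

The same value looks "wrong" or "right" depending purely on how many digits the display chooses to show.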

> Presumably ABC used rationals because usability studies showed they worked
> best (or didn't they test this?).

No, I think at best the usability studies showed that floating point
had problems that the ABC authors weren't able to clearly explain to
newbies.  There was never an experiment comparing FP to rationals.

> Presumably the TeachScheme! dialect of
> Scheme uses rationals for the same reason.

Probably for the same reasons.

> Curiously, the latter behaves
> differently depending on "language level":
> 
> > (define x (/ 2 3))
> > x
> 2/3
> > (+ x 0.5)
> 1.1666666666666665
> >
> 
> That's what you get under the "Full Scheme" setting.  Under all other
> settings (Beginning, Intermediate, and Advanced Student), you get this
> instead:
> 
> > (define x (/ 2 3))
> > x
> 2/3
> > (+ x 0.5)
> 7/6
> >
> 
> In those you have to tag 0.5 as being inexact in order to avoid having it
> treated as ABC did (i.e., as an exact decimal rational):
> 
> > (+ x #i0.5)
> #i1.1666666666666665
> >
> 
> > (- (* .58 100) 58)   ; showing that .58 is treated as exact
> 0
> > (- (* #i.58 100) 58) ; same IEEE result as Python when .58 tagged w/ #i
> #i-7.105427357601002e-015
> >
> 
> So that's their conclusion:  exact rationals are best for students at all
> levels (apparently the same conclusion reached by ABC), but when you get to
> the real world rationals are no longer a suitable meaning for fp literals
> (apparently the same conclusion *I* reached from using ABC; 1/10 and 0.1 are
> indeed very different beasts to me).

Another hard question: does that mean that 1 and 1.0 are also very
different beasts to you?  They weren't to the Alice users who started
this by expecting 1/4 to represent a quarter turn.

> A hard question:  what if they're right?  That is, that you have to favor one
> of newbies or experienced users at the cost of genuine harm to the other?

You know where I'm leaning...  I don't know that newbies are genuinely
hurt by FP.  If we do it right, the naive ones will try 11.0/10.0, see
that it prints 1.1, and be happy; the persistent ones will try
1.1**2-1.21, ask for an explanation, and get an introduction to
floating point.  This *doesn't* have to explain all the details, just
the two facts that you can lose precision and that 1.1 isn't
representable exactly in binary.  Only the latter should be new to
them.
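The two facts can be checked directly; a quick illustration in modern syntax:

```python
# Fact 1: 1.1 is not exactly representable in binary floating point,
# so an algebraically-zero expression leaves a tiny residue.
residue = 1.1 ** 2 - 1.21
assert residue != 0.0          # precision was lost...
assert abs(residue) < 1e-15    # ...but only a little

# Fact 2: the naive case still looks right when rounded for display.
print(f"{11.0 / 10.0:.1f}")    # prints 1.1
```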

--Guido van Rossum (home page: http://www.python.org/~guido/)



From paulp at ActiveState.com  Tue Mar 13 12:45:21 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Tue, 13 Mar 2001 03:45:21 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> <200103131018.FAA30047@cj20424-a.reston1.va.home.com>
Message-ID: <3AAE0851.3B683941@ActiveState.com>

Guido van Rossum wrote:
> 
>...
> 
> You know where I'm leaning...  I don't know that newbies are genuinely
> hurt by FP.  If we do it right, the naive ones will try 11.0/10.0, see
> that it prints 1.1, and be happy; the persistent ones will try
> 1.1**2-1.21, ask for an explanation, and get an introduction to
> floating point.  This *doesn't* have to explain all the details, just
> the two facts that you can lose precision and that 1.1 isn't
> representable exactly in binary.  Only the latter should be new to
> them.

David Ascher suggested during the talk that comparisons of floats could
raise a warning unless you turned that warning off (which only
knowledgeable people would do). I think that would go a long way to
helping them find and deal with serious floating point inaccuracies in
their code.

-- 
Python:
    Programming the way
    Guido
    indented it.
       - (originated with Skip Montanaro?)



From guido at digicool.com  Tue Mar 13 12:42:35 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 06:42:35 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: Your message of "Tue, 13 Mar 2001 03:45:21 PST."
             <3AAE0851.3B683941@ActiveState.com> 
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> <200103131018.FAA30047@cj20424-a.reston1.va.home.com>  
            <3AAE0851.3B683941@ActiveState.com> 
Message-ID: <200103131142.GAA30567@cj20424-a.reston1.va.home.com>

[me]
> > You know where I'm leaning...  I don't know that newbies are genuinely
> > hurt by FP.  If we do it right, the naive ones will try 11.0/10.0, see
> > that it prints 1.1, and be happy; the persistent ones will try
> > 1.1**2-1.21, ask for an explanation, and get an introduction to
> > floating point.  This *doesn't* have to explain all the details, just
> > the two facts that you can lose precision and that 1.1 isn't
> > representable exactly in binary.  Only the latter should be new to
> > them.

[Paul]
> David Ascher suggested during the talk that comparisons of floats could
> raise a warning unless you turned that warning off (which only
> knowledgeable people would do). I think that would go a long way to
> helping them find and deal with serious floating point inaccuracies in
> their code.

You mean only for == and !=, right?  This could easily be implemented
now that we have rich comparisons.  We should wait until 2.2 though --
we haven't clearly decided that this is the way we want to go.
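A minimal sketch of the idea in today's Python, using rich comparisons on a float subclass (hypothetical names; this is an illustration of the mechanism, not a proposed API):

```python
import warnings

class WarnFloat(float):
    """Hypothetical float that warns on exact (in)equality tests."""
    def __eq__(self, other):
        warnings.warn("exact float comparison is rarely what you want",
                      stacklevel=2)
        return float.__eq__(self, other)
    def __ne__(self, other):
        warnings.warn("exact float comparison is rarely what you want",
                      stacklevel=2)
        return float.__ne__(self, other)
    __hash__ = float.__hash__  # keep hashing consistent with equality

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    WarnFloat(1.1) == 1.1          # triggers the warning
    print(len(caught))             # 1
```

Knowledgeable users could silence it with the ordinary `warnings` filter machinery, which is exactly the opt-out David suggested.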

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas at xs4all.net  Tue Mar 13 12:54:19 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 12:54:19 +0100
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131018.FAA30047@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Tue, Mar 13, 2001 at 05:18:35AM -0500
References: <LNBBLJKPBEHFEDALKOLCIEEAJFAA.tim.one@home.com> <200103131018.FAA30047@cj20424-a.reston1.va.home.com>
Message-ID: <20010313125418.A404@xs4all.nl>

On Tue, Mar 13, 2001 at 05:18:35AM -0500, Guido van Rossum wrote:

> I think the extent to which HWFP doesn't work for newbies is mostly
> related to the change we made in 2.0 where repr() (and hence the
> interactive prompt) show full precision, leading to annoyances like
> repr(1.1) == '1.1000000000000001'.
> 
> I've noticed that the number of complaints I see about this went way up
> after 2.0 was released.
> 
> I expect that most newbies don't use floating point in a fancy way, and
> would never notice it if it was slightly off as long as the output was
> rounded like it was before 2.0.

I suspect that the change in float.__repr__() did reduce the number of
surprises over something like this, though (taken from a 1.5.2 interpreter):

>>> x = 1.000000000001
>>> x
1.0
>>> x == 1.0
0

If we go for the HWFP + loosened precision in printing you seem to prefer,
we should be conscious of this, possibly raising a warning when comparing
floats in this way. (Or in any way at all? Given that when you compare two
floats, you either didn't intend to, or your name is Tim or Moshe and you
would be just as happy writing the IEEE754 binary representation directly :)
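For reference, the comparison problem described here is what later Pythons address with an explicit tolerance test; a sketch using today's stdlib (which of course did not exist in 2001):

```python
import math

# Exact equality fails for values that are "equal" only algebraically:
assert (0.1 + 0.2 == 0.3) is False

# A tolerance-based comparison says what the user usually means:
assert math.isclose(0.1 + 0.2, 0.3)
```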

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tismer at tismer.com  Tue Mar 13 14:29:53 2001
From: tismer at tismer.com (Christian Tismer)
Date: Tue, 13 Mar 2001 14:29:53 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <3AAE20D1.5D375ECB@tismer.com>

Ok, I'm adding some comments.

Jeremy Hylton wrote:
> 
> Here are some comments on Gordon's new draft of PEP 219 and the
> stackless dev day discussion at Spam 9.
> 
> I left the dev day discussion with the following takehome message:
> There is a tension between Stackless Python on the one hand and making
> Python easy to embed in and extend with C programs on the other hand.
> The PEP describes this as the major difficulty with C Python.  I won't
> repeat the discussion of the problem there.
> 
> I would like to see a somewhat more detailed discussion of this in
> the PEP.  I think it's an important issue to work out before making a
> decision about a stack-light patch.
> 
> The problem of nested interpreters and the C API seems to come up in
> several ways.  These are all touched on in the PEP, but not in much
> detail.  This message is mostly a request for more detail :-).
> 
>   - Stackless disallows transfer out of a nested interpreter.  (It
>     has to; anything else would be insane.)  Therefore, the
>     specification for microthreads &c. will be complicated by a
>     listing of the places where control transfers are not possible.

To be more precise: Stackless catches any attempt to transfer to a
frame that has been locked (i.e., is being run) by an interpreter that
is not the topmost on the C stack. That's all. You might even run
microthreads in the fifth interpreter recursion, and later return to
other (stalled) microthreads, as long as this condition is met.

>     The PEP says this is not ideal, but not crippling.  I'd like to
>     see an actual spec for where it's not allowed in pure Python.  It
>     may not be crippling, but it may be a tremendous nuisance in
>     practice; e.g. remember that __init__ calls create a critical
>     section.

At the moment, *all* of the __xxx__ methods are restricted to stack-
like behavior. __init__ and __getitem__ should probably be the first
methods beyond Stack-lite, which should get extra treatment.

>   - If an application makes use of C extensions that do create nested
>     interpreters, they will make it even harder to figure out when
>     Python code is executing in a nested interpreter.  For a large
>     systems with several C extensions, this could be complicated.  I
>     presume, therefore, that there will be a C API for playing nice
>     with stackless.  I'd like to see a PEP that discusses what this C
>     API would look like.

Ok. I see the need for an interface for frames here.
An extension should be able to create a frame, together with
necessary local memory.
It appears to need two or three functions in the extension:
1) Preparation phase
   The extension provides an "interpreter" function which is in
   charge of handling this frame. The preparation phase puts a
   pointer to this function into the frame.
2) Execution phase
   The frame is run by the frame dispatcher, which calls the
   interpreter function.
   For every nested call into Python, the interpreter function
   needs to return with a special signal for the scheduler,
   that there is now a different frame to be scheduled.
   These notifications, and modifying the frame chain, should
   be hidden by API calls.
3) cleanup phase (necessary?)
   A finalization function may be (optionally) provided for
   the frame destructor.
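The shape of phase 2 — nested calls becoming returns to a dispatcher — can be sketched in plain Python with generators standing in for frames (an illustration of the control flow only, not the Stackless C API):

```python
def dispatcher(frame):
    """Run a tree of 'frames' iteratively: a frame is a generator that
    yields a sub-frame instead of calling it, and the dispatcher feeds
    the sub-frame's return value back in.  No C-stack recursion occurs."""
    stack, result = [frame], None
    while stack:
        try:
            # "Call": the top frame asks for another frame to be run.
            stack.append(stack[-1].send(result))
            result = None
        except StopIteration as stop:
            # "Return": hand the finished frame's value to its caller.
            result = stop.value
            stack.pop()
    return result

def factorial(n):
    if n == 0:
        return 1
    sub = yield factorial(n - 1)   # schedule, don't recurse
    return n * sub

print(dispatcher(factorial(5)))    # 120
```

Every "recursive" step is turned into a return to the dispatcher and a repetitive call from it, which matches the observation below that the total number of function calls stays nearly the same.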

>   - Would all of the internal Python calls that create nested
>     interpreters be replaced?  I'm thinking of things like
>     PySequence_Fast() and the ternary_op() call in abstract.c.  How
>     hard will it be to convert all these functions to be stackless?

PySequence_Fast() calls back into PySequence_Tuple(). In the generic
sequence case, it calls 
       PyObject *item = (*m->sq_item)(v, i);

This call may now need to return to the frame dispatcher without
having its work done. But we cannot do this, because the current
API guarantees that this method will return either with a result
or an exception. This means we can, of course, modify the interpreter
to deal with a third kind of state, but this would probably break
some existing extensions.
It was the reason why I didn't try to go further here: whatever
is exposed to code other than Python itself might break under such
an extension, unless we find a way to distinguish *who* calls.
On the other hand, if we are really at a new Python,
incompatibility would be just ok, and the problem would vanish.

>     How many functions are affected?  And how many places are they
>     called from?

This needs more investigation.

>   - What is the performance impact of adding the stackless patches?  I
>     think Christian mentioned a 10% slowdown at dev day, which doesn't
>     sound unreasonable.  Will reworking the entire interpreter to be
>     stackless make that slowdown larger or smaller?

No, it is about 5 percent. My optimization gains about 15 percent,
which makes a win of 10 percent overall.
The speed loss seems to be related to extra initialization calls
for frames, and the somewhat more difficult parameter protocol.
The fact that recursions are turned into repetitive calls from
a scheduler seems to have no impact. In other words: Further
"stackless" versions of internal functions will probably not
produce another slowdown.
This matches the observation that the number of function calls
is nearly the same, whether recursion is used or stackless.
It is mainly the order of function calls that is changed.

> One other set of issues, that is sort-of out of bounds for this
> particular PEP, is what control features do we want that can only be
> implemented with stackless.  Can we implement generators or coroutines
> efficiently without a stackless approach?

For some limited view of generators: Yes, absolutely. *)
For coroutines: For sure not.

*) Generators which live in the context of the calling
function, like the stack-based generator implementation of
one of the first Icon implementations, I think.
That is, these generators cannot be re-used somewhere else.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From uche.ogbuji at fourthought.com  Tue Mar 13 15:47:17 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Tue, 13 Mar 2001 07:47:17 -0700
Subject: [Python-Dev] comments on PEP 219 
In-Reply-To: Message from Jeremy Hylton <jeremy@alum.mit.edu> 
   of "Mon, 12 Mar 2001 19:14:47 EST." <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103131447.HAA32016@localhost.localdomain>

> Here are some comments on Gordon's new draft of PEP 219 and the
> stackless dev day discussion at Spam 9.
> 
> I left the dev day discussion with the following takehome message:
> There is a tension between Stackless Python on the one hand and making
> Python easy to embed in and extend with C programs on the other hand.
> The PEP describes this as the major difficulty with C Python.  I won't
> repeat the discussion of the problem there.

You know, even though I would like to have some of the Stackless features, my 
skeptical reaction to some of the other Grand Ideas circulating at IPC9, 
including static types, leads me to think I might not be thinking clearly on 
the Stackless question.

I think that if there is no way to address the many important concerns raised 
by people at the Stackless session (minus the "easy to learn" argument IMO), 
Stackless is probably a bad idea to shove into Python.

I still think that the Stackless execution structure would be a huge 
performance boost in many XML processing tasks, but that's not worth making 
Python intractable for extension writers.

Maybe it's not so bad for Stackless to remain a branch, given how closely 
Christian can work with Pythonlabs.  The main problem is the load on 
Christian, which would be mitigated as he gained collaborators.  The other 
problem would be that interested extension writers might need to maintain 2 
code-bases as well.  Maybe one could develop some sort of adaptor.

Or maybe Stackless should move to core, but only in P3K in which extension 
writers should be expecting weird and wonderful new models, anyway (right?)


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From tismer at tismer.com  Tue Mar 13 16:12:03 2001
From: tismer at tismer.com (Christian Tismer)
Date: Tue, 13 Mar 2001 16:12:03 +0100
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
References: <200103131447.HAA32016@localhost.localdomain>
Message-ID: <3AAE38C3.2C9BAA08@tismer.com>


Uche Ogbuji wrote:
> 
> > Here are some comments on Gordon's new draft of PEP 219 and the
> > stackless dev day discussion at Spam 9.
> >
> > I left the dev day discussion with the following takehome message:
> > There is a tension between Stackless Python on the one hand and making
> > Python easy to embed in and extend with C programs on the other hand.
> > The PEP describes this as the major difficulty with C Python.  I won't
> > repeat the discussion of the problem there.
> 
> You know, even though I would like to have some of the Stackless features, my
> skeptical reaction to some of the other Grand Ideas circulating at IPC9,
> including static types leads me to think I might not be thinking clearly on
> the Stackless question.
> 
> I think that if there is no way to address the many important concerns raised
> by people at the Stackless session (minus the "easy to learn" argument IMO),
> Stackless is probably a bad idea to shove into Python.

Maybe I'm repeating myself, but I'd like to clarify:
I do not plan to introduce anything that forces anybody to change
her code. This is all about extending the current capabilities.

> I still think that the Stackless execution structure would be a huge
> performance boost in many XML processing tasks, but that's not worth making
> Python intractable for extension writers.

Extension writers only have to think about the Stackless
protocol (to be defined) if they want to play the Stackless
game. If this is not intended, this isn't all that bad. It only means
that they cannot switch a microthread while the extension does
a callback.
But that is all the same as today. So how could Stackless make
extensions intractable, unless someone *wants* to get all of it?

An XML processor in C will not take advantage of Stackless unless
it is designed for that. But nobody enforces this. Stackless can
behave as recursively as standard Python, and it is completely aware
of recursions. It will not break.

It is the programmer's choice to make a switchable extension
or not. That is simply one more choice than exists today.

> Maybe it's not so bad for Stackless to remain a branch, given how closely
> Christian can work with Pythonlabs.  The main problem is the load on
> Christian, which would be mitigated as he gained collaborators.  The other
> problem would be that interested extension writers might need to maintain 2
> code-bases as well.  Maybe one could develop some sort of adaptor.
> 
> Or maybe Stackless should move to core, but only in P3K in which extension
> writers should be expecting weird and wonderful new models, anyway (right?)

That's no alternative. Remember Guido's words:
P3K will never become reality. It is a virtual
place to put all the things that might happen in some future.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From esr at snark.thyrsus.com  Tue Mar 13 16:32:51 2001
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Tue, 13 Mar 2001 10:32:51 -0500
Subject: [Python-Dev] CML2 compiler speedup
Message-ID: <200103131532.f2DFWpw04691@snark.thyrsus.com>

I bit the bullet and hand-rolled a recursive-descent expression parser
for CML2 to replace the Earley-algorithm parser described in my
previous note.  It is a little more than twice as fast as the SPARK
code, cutting the CML2 compiler runtime almost exactly in half.
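For readers curious what "hand-rolled recursive descent" means here, a toy version of such an expression parser (a generic sketch with invented names, not ESR's CML2 code):

```python
import re

# Grammar, layered by precedence so no Earley-style ambiguity handling
# is needed:
#   expr   := term (('+'|'-') term)*
#   term   := factor (('*'|'/') factor)*
#   factor := NUMBER | '(' expr ')'
def tokenize(s):
    return re.findall(r'\d+|[-+*/()]', s)

def parse_expr(toks, i=0):
    val, i = parse_term(toks, i)
    while i < len(toks) and toks[i] in '+-':
        op = toks[i]
        rhs, i = parse_term(toks, i + 1)
        val = val + rhs if op == '+' else val - rhs
    return val, i

def parse_term(toks, i):
    val, i = parse_factor(toks, i)
    while i < len(toks) and toks[i] in '*/':
        op = toks[i]
        rhs, i = parse_factor(toks, i + 1)
        val = val * rhs if op == '*' else val / rhs
    return val, i

def parse_factor(toks, i):
    if toks[i] == '(':
        val, i = parse_expr(toks, i + 1)
        return val, i + 1  # skip the closing ')'
    return int(toks[i]), i + 1

def evaluate(s):
    return parse_expr(tokenize(s))[0]

print(evaluate("2+3*4"))    # 14
```

One function per grammar rule, no parser generator, no runtime grammar analysis — which is why it tends to beat a general algorithm like Earley's on simple expression languages.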

Sigh.  I had been intending to recommend SPARK for the Python standard
library -- as I pointed out in my PC9 paper, it would be the last
piece stock Python needs to be an effective workbench for
minilanguage construction.  Unfortunately I'm now convinced Paul
Prescod is right and it's too slow for production use, at least at
version 0.6.1.  

John Aycock says 0.7 will be substantially faster; I'll keep an eye on
this.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

The price of liberty is, always has been, and always will be blood.  The person
who is not willing to die for his liberty has already lost it to the first
scoundrel who is willing to risk dying to violate that person's liberty.  Are
you free? 
	-- Andrew Ford



From moshez at zadka.site.co.il  Tue Mar 13 07:20:47 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Tue, 13 Mar 2001 08:20:47 +0200
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
Message-ID: <E14ciAp-0005dJ-00@darjeeling>

After discussions in IPC9 one of the decisions was to set up a mailing
list for discussion of the numeric model of Python.

Subscribe here:

    http://lists.sourceforge.net/lists/listinfo/python-numerics

Or here:

    python-numerics-request at lists.sourceforge.net

I will post my PEPs there as soon as an initial checkin is completed.
Please direct all further numeric model discussion there.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From paul at pfdubois.com  Tue Mar 13 17:38:35 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Tue, 13 Mar 2001 08:38:35 -0800
Subject: [Python-Dev] Kinds
Message-ID: <ADEOIFHFONCLEEPKCACCIEKKCGAA.paul@pfdubois.com>

I was asked to write down what I said at the dev day session about kinds. I
have put this in the form of a proposal-like writeup which is attached. I
hope this helps you understand what I meant.

-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: kinds.txt
URL: <http://mail.python.org/pipermail/python-dev/attachments/20010313/9f16e7f4/attachment-0001.txt>

From guido at digicool.com  Tue Mar 13 17:43:42 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 11:43:42 -0500
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: Your message of "Tue, 06 Mar 2001 07:51:49 CST."
             <15012.60277.150431.237935@beluga.mojam.com> 
References: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>  
            <15012.60277.150431.237935@beluga.mojam.com> 
Message-ID: <200103131643.LAA01072@cj20424-a.reston1.va.home.com>

> Two things come to mind.  One, perhaps a more careful coding of urllib to
> avoid exposing names it shouldn't export would be a better choice.  Two,
> perhaps those symbols that are not documented but that would be useful when
> extending urllib functionality should be documented and added to __all__.
> 
> Here are the non-module names I didn't include in urllib.__all__:

Let me annotate these in-line:

>     MAXFTPCACHE			No
>     localhost				Yes
>     thishost				Yes
>     ftperrors				Yes
>     noheaders				No
>     ftpwrapper			No
>     addbase				No
>     addclosehook			No
>     addinfo				No
>     addinfourl			No
>     basejoin				Yes
>     toBytes				No
>     unwrap				Yes
>     splittype				Yes
>     splithost				Yes
>     splituser				Yes
>     splitpasswd			Yes
>     splitport				Yes
>     splitnport			Yes
>     splitquery			Yes
>     splittag				Yes
>     splitattr				Yes
>     splitvalue			Yes
>     splitgophertype			Yes
>     always_safe			No
>     getproxies_environment		No
>     getproxies			Yes
>     getproxies_registry		No
>     test1				No
>     reporthook			No
>     test				No
>     main				No
> 
> None are documented, so there are no guarantees if you use them (I have
> subclassed addinfourl in the past myself).

Note that there's a comment block "documenting" all the split*()
functions, indicating that I intended them to be public.  For the
rest, I'm making a best guess based on how useful these things are and
how closely tied to the implementation etc.
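The mechanism under discussion, for readers following along: `__all__` is simply the list of names that `from module import *` exposes. A toy demonstration (a hypothetical throwaway module, not urllib itself):

```python
import sys
import types

# Build a miniature module shaped like urllib's layout: one public
# helper, one underscored internal, one module-level constant.
toy = types.ModuleType("toy")
exec(
    "__all__ = ['splittype']\n"
    "def splittype(url): return url.split(':', 1)\n"
    "def _helper(): pass\n"
    "MAXFTPCACHE = 10\n",
    toy.__dict__,
)
sys.modules["toy"] = toy

ns = {}
exec("from toy import *", ns)
print(sorted(n for n in ns if not n.startswith('__')))  # ['splittype']
```

Without `__all__`, star-import would have pulled in `MAXFTPCACHE` too (underscored names are skipped either way), which is exactly the leakage the annotated list above is meant to prevent.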

--Guido van Rossum (home page: http://www.python.org/~guido/)




From jeremy at alum.mit.edu  Tue Mar 13 03:42:20 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 12 Mar 2001 21:42:20 -0500 (EST)
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
In-Reply-To: <3AAE38C3.2C9BAA08@tismer.com>
References: <200103131447.HAA32016@localhost.localdomain>
	<3AAE38C3.2C9BAA08@tismer.com>
Message-ID: <15021.35084.46284.376573@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "CT" == Christian Tismer <tismer at tismer.com> writes:

  CT> Maybe I'm repeating myself, but I'd like to clarify: I do not
  CT> plan to introduce anything that forces anybody to change her
  CT> code. This is all about extending the current capabilities.

The problem with this position is that C code that uses the old APIs
interferes in odd ways with features that depend on stackless,
e.g. the __xxx__ methods.[*]  If the old APIs work but are not
compatible, we'll end up having to rewrite all our extensions so that
they play nicely with stackless.

If we change the core and standard extensions to use stackless
interfaces, then this style will become the standard style.  If the
interface is simple, this is no problem.  If the interface is complex,
it may be a problem.  My point is that if we change the core APIs, we
place a new burden on extension writers.

Jeremy

    [*] If we fix the type-class dichotomy, will it have any effect on
    the stackful nature of some of these C calls?



From jeremy at alum.mit.edu  Tue Mar 13 03:47:41 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 12 Mar 2001 21:47:41 -0500 (EST)
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
In-Reply-To: <E14ciAp-0005dJ-00@darjeeling>
References: <E14ciAp-0005dJ-00@darjeeling>
Message-ID: <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>

We've spun off a lot of new lists recently.  I don't particularly care
for this approach, because I sometimes feel like I spend more time
subscribing to new lists than I do actually reading them <0.8 wink>.

I assume that most people are relieved to have the traffic taken off
python-dev.  (I can't think of any other reason to create a separate
list.)  But what's the well-informed Python hacker to do?  Subscribe
to dozens of different lists to discuss each different feature?

A possible solution: python-dev-all at python.org.  This list would be
subscribed to each of the special topic mailing lists.  People could
subscribe to it to get all of the mail without having to individually
subscribe to all the sublists.  Would this work?

Jeremy



From barry at digicool.com  Tue Mar 13 18:12:19 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Tue, 13 Mar 2001 12:12:19 -0500
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
References: <E14ciAp-0005dJ-00@darjeeling>
	<15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <15022.21747.94249.599599@anthem.wooz.org>

There was some discussion at IPC9 about implementing `topics' in
Mailman which I think would solve this problem nicely.  I don't have
time to go into much details now, and it's definitely a medium-term
solution (since other work is taking priority right now).

-Barry



From aycock at csc.UVic.CA  Tue Mar 13 17:54:48 2001
From: aycock at csc.UVic.CA (John Aycock)
Date: Tue, 13 Mar 2001 08:54:48 -0800
Subject: [Python-Dev] Re: CML2 compiler slowness
Message-ID: <200103131654.IAA22731@valdes.csc.UVic.CA>

| From paulp at ActiveState.com Mon Mar 12 18:39:28 2001
| Is there a tutorial about how to make fast Spark grammars or should I go
| back and re-read my compiler construction books?

My advice would be to avoid heavy use of obviously ambiguous
constructions, like defining expressions to be
	E ::= E op E

Aside from that, the whole point of SPARK is to have the language you're
implementing up and running, fast -- even if you don't have a lot of
background in compiler theory.  It's not intended to spit out blazingly
fast production compilers.  If the result isn't fast enough for your
purposes, then you can replace SPARK components with faster ones; you're
not locked in to using the whole package.  Or, if you're patient, you can
wait for the tool to improve :-)

John



From gmcm at hypernet.com  Tue Mar 13 18:17:39 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 12:17:39 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <3AAE0FE3.2206.7AB85588@localhost>

[Jeremy]
> Here are some comments on Gordon's new draft of PEP 219 and the
> stackless dev day discussion at Spam 9.
> 
> I left the dev day discussion with the following takehome
> message: There is a tension between Stackless Python on the one
> hand and making Python easy to embed in and extend with C
> programs on the other hand. The PEP describes this as the major
> difficulty with C Python.  I won't repeat the discussion of the
> problem there.

Almost all of the discussion about interpreter recursions is 
about completeness, *not* about usability. If you were to 
examine all the Stackless-using apps out there, I think you 
would find that they rely on a stackless version of only one 
builtin - apply().

I can think of 2 practical situations in which it would be *nice* 
to be rid of the recursion:

 - magic methods (__init__, __getitem__ and __getattr__ in 
particular). But magic methods are a convenience. There's 
absolutely nothing there that can't be done another way.

 - a GUI. Again, no big deal, because GUIs impose all kinds of 
restrictions to begin with. If you use a GUI with threads, you 
almost always have to dedicate one thread (usually the main 
one) to the GUI and be careful that the other threads don't 
touch the GUI directly. It's basically the same issue with 
Stackless.
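The first point in miniature: a magic method is sugar for a call that could always be spelled explicitly (toy example, names invented here):

```python
class Turns:
    """Toy sequence: s[i] and s.item(i) do exactly the same work."""
    def __init__(self, data):
        self.data = list(data)
    def item(self, i):            # the plain-call spelling
        return self.data[i]
    def __getitem__(self, i):     # the convenient spelling
        return self.item(i)

s = Turns([10, 20, 30])
assert s[1] == s.item(1) == 20
```

Code that needs to sidestep a restriction on `__getitem__` can call `item()` directly, at the cost of convenience only.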
 
As for the rest of the possible situations, demand is 
nonexistent. In an ideal world, we'd never have to answer the 
question "how come it didn't work?". But put on your 
application programmer's hat for a moment and see if you can 
think of a legitimate reason for, e.g., one of the objects in an 
__add__ wanting to make use of a pre-existing coroutine 
inside the __add__ call. [Yeah, Tim can come up with a 
reason, but I did say "legitimate".]

> I would like to see a somewhat more detailed discussion of this
> in the PEP.  I think it's an important issue to work out before
> making a decision about a stack-light patch.

I'm not sure why you say that. The one comparable situation 
in normal Python is crossing threads in callbacks. With the 
exception of a couple of complete madmen (doing COM 
support), everyone else learns to avoid the situation. [Mark 
doesn't even claim to know *how* he solved the problem 
<wink>].
 
> The problem of nested interpreters and the C API seems to come up
> in several ways.  These are all touched on in the PEP, but not in
> much detail.  This message is mostly a request for more detail
> :-).
> 
>   - Stackless disallows transfer out of a nested interpreter.
>     (It has to; anything else would be insane.)  Therefore, the
>     specification for microthreads &c. will be complicated by a
>     listing of the places where control transfers are not
>     possible. The PEP says this is not ideal, but not crippling. 
>     I'd like to see an actual spec for where it's not allowed in
>     pure Python.  It may not be crippling, but it may be a
>     tremendous nuisance in practice; e.g. remember that __init__
>     calls create a critical section.

The one instance I can find on the Stackless list (of 
attempting to use a continuation across interpreter 
invocations) was a call to uthread.wait() in __init__. Arguably 
a (minor) nuisance, arguably bad coding practice (even if it 
worked).

I encountered it when trying to make a generator work with a 
for loop. So you end up using a while loop <shrug>.
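The workaround Gordon mentions — driving the generator by hand from a while loop rather than letting a for loop's iteration protocol re-enter the interpreter — can be sketched like this. The `Counter` class is a hypothetical stand-in for the real generator object; this is an illustration only, not Stackless code:

```python
class Counter:
    # Hypothetical generator-like object.  Driving it from an
    # explicit while loop avoids the nested interpreter call
    # that a for loop's iteration protocol would make.
    def __init__(self, n):
        self.i = 0
        self.n = n

    def next(self):
        if self.i >= self.n:
            return None          # sentinel instead of an exception
        self.i += 1
        return self.i - 1

values = []
counter = Counter(3)
while True:
    v = counter.next()
    if v is None:
        break
    values.append(v)
print(values)  # [0, 1, 2]
```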

It's disallowed wherever it's not accommodated. Listing those 
cases is probably not terribly helpful; I bet even Guido is 
sometimes surprised at what actually happens under the 
covers. The message "attempt to run a locked frame" is not 
very meaningful to the Stackless newbie, however.
 
[Christian answered the others...]


- Gordon



From DavidA at ActiveState.com  Tue Mar 13 18:25:49 2001
From: DavidA at ActiveState.com (David Ascher)
Date: Tue, 13 Mar 2001 09:25:49 -0800
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131142.GAA30567@cj20424-a.reston1.va.home.com>
Message-ID: <PLEJJNOHDIGGLDPOGPJJEEPNCNAA.DavidA@ActiveState.com>

GvR:

> [Paul]
> > David Ascher suggested during the talk that comparisons of floats could
> > raise a warning unless you turned that warning off (which only
> > knowledgable people would do). I think that would go a long way to
> > helping them find and deal with serious floating point inaccuracies in
> > their code.
>
> You mean only for == and !=, right?

Right.

> We should wait until 2.2 though --
> we haven't clearly decided that this is the way we want to go.

Sure.  It was just a suggestion for a way to address the inherent problems
in having newbies work w/ FP (where newbie in this case is 99.9% of the
programming population, IMO).

-david




From thomas at xs4all.net  Tue Mar 13 19:08:05 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Tue, 13 Mar 2001 19:08:05 +0100
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
In-Reply-To: <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>; from jeremy@alum.mit.edu on Mon, Mar 12, 2001 at 09:47:41PM -0500
References: <E14ciAp-0005dJ-00@darjeeling> <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <20010313190805.C404@xs4all.nl>

On Mon, Mar 12, 2001 at 09:47:41PM -0500, Jeremy Hylton wrote:

> We've spun off a lot of new lists recently.  I don't particularly care
> for this approach, because I sometimes feel like I spend more time
> subscribing to new lists than I do actually reading them <0.8 wink>.

And even if they are separate lists, people keep crossposting, completely
negating the idea behind separate lists. ;P I think the main reason for
separate lists is to allow non-python-dev-ers easy access to the lists. 

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gmcm at hypernet.com  Tue Mar 13 19:29:56 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 13:29:56 -0500
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
In-Reply-To: <15021.35084.46284.376573@w221.z064000254.bwi-md.dsl.cnc.net>
References: <3AAE38C3.2C9BAA08@tismer.com>
Message-ID: <3AAE20D4.25660.7AFA8206@localhost>

> >>>>> "CT" == Christian Tismer <tismer at tismer.com> writes:
> 
>   CT> Maybe I'm repeating myself, but I'd like to clarify: I do
>   not CT> plan to introduce anything that forces anybody to
>   change her CT> code. This is all about extending the current
>   capabilities.

[Jeremy] 
> The problem with this position is that C code that uses the old
> APIs interferes in odd ways with features that depend on
> stackless, e.g. the __xxx__ methods.[*]  If the old APIs work but
> are not compatible, we'll end up having to rewrite all our
> extensions so that they play nicely with stackless.

I don't understand. Python code calls C extension. C 
extension calls Python callback which tries to use a 
pre-existing coroutine. Where is the "interference"? The 
callback exists only because the C extension has an API 
that uses callbacks. 

Well, OK, the callback doesn't have to be explicit. The C can 
go fumbling around in a passed-in object and find something 
callable. But to call it "interference", I think you'd have to have 
a working program which stopped working when a C extension 
crept into it without the programmer noticing <wink>.

> If we change the core and standard extensions to use stackless
> interfaces, then this style will become the standard style.  If
> the interface is simple, this is no problem.  If the interface is
> complex, it may be a problem.  My point is that if we change the
> core APIs, we place a new burden on extension writers.

This is all *way* out of scope, but if you go the route of 
creating a pseudo-frame for the C code, it seems quite 
possible that the interface wouldn't have to change at all. We 
don't need any more args into PyEval_EvalCode. We don't 
need any more results out of it. Christian's stackless map 
implementation is proof-of-concept that you can do this stuff.

The issue (if and when we get around to "truly and completely 
stackless") is complexity for the Python internals 
programmer, not your typical object-wrapping / SWIG-swilling 
extension writer.


> Jeremy
> 
>     [*] If we fix the type-class dichotomy, will it have any
>     effect on the stackful nature of some of these C calls?

Don't know. What will those calls look like <wink>?

- Gordon



From jeremy at alum.mit.edu  Tue Mar 13 19:30:37 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue, 13 Mar 2001 13:30:37 -0500 (EST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <20010313185501.A7459@planck.physik.uni-konstanz.de>
References: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
	<3AAE0FE3.2206.7AB85588@localhost>
	<20010313185501.A7459@planck.physik.uni-konstanz.de>
Message-ID: <15022.26445.896017.406266@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "BR" == Bernd Rinn <Bernd.Rinn at epost.de> writes:

  BR> On Tue, Mar 13, 2001 at 12:17:39PM -0500, Gordon McMillan wrote:
  >> The one instance I can find on the Stackless list (of attempting
  >> to use a continuation across interpreter invocations) was a call
  >> the uthread.wait() in __init__. Arguably a (minor) nuisance,
  >> arguably bad coding practice (even if it worked).

[explanation of code practice that led to the error omitted]

  BR> So I suspect that you might end up with a rule of thumb:

  BR> """ Don't use classes and libraries that use classes when doing
  BR> IO in microthreaded programs!  """

  BR> which might indeed be a problem. Am I overlooking something
  BR> fundamental here?

Thanks for asking this question in a clear and direct way.

A few other variations on the question come to mind:

    If a programmer uses a library implemented via coroutines, can she
    call library methods from an __xxx__ method?

    Can coroutines or microthreads co-exist with callbacks invoked by
    C extensions? 

    Can a program do any microthread IO in an __call__ method?

If any of these are the sort of "in theory" problems that the PEP alludes
to, then we need a full spec for what is and is not allowed.  It
doesn't make sense to tell programmers to follow unspecified
"reasonable" programming practices.

Jeremy



From ping at lfw.org  Tue Mar 13 19:44:37 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 13 Mar 2001 10:44:37 -0800 (PST)
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <20010313125418.A404@xs4all.nl>
Message-ID: <Pine.LNX.4.10.10103131039260.13108-100000@skuld.kingmanhall.org>

On Tue, Mar 13, 2001 at 05:18:35AM -0500, Guido van Rossum wrote:
> I think the extent to which HWFP doesn't work for newbies is mostly
> related to the change we made in 2.0 where repr() (and hence the
> interactive prompt) show full precision, leading to annoyances like
> repr(1.1) == '1.1000000000000001'.

I'll argue now -- just as i argued back then, but louder! -- that
this isn't necessary.  repr(1.1) can be 1.1 without losing any precision.

Simply stated, you only need to display as many decimal places as are
necessary to regenerate the number.  So if x happens to be the
floating-point number closest to 1.1, then 1.1 is all you have to show.

By definition, if you type x = 1.1, x will get the floating-point
number closest in value to 1.1.  So x will print as 1.1.  And entering
1.1 will be sufficient to reproduce x exactly.
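Ping's rule can be sketched in a few lines — an illustration only, not how any Python release implements repr(): keep adding significant digits until the decimal string converts back to exactly the same float.

```python
def shortest_repr(x):
    # Find the fewest significant digits whose decimal string
    # round-trips to exactly the same float.  17 digits always
    # suffice for an IEEE-754 double, so the loop terminates.
    for ndigits in range(1, 18):
        s = "%.*g" % (ndigits, x)
        if float(s) == x:
            return s
    return repr(x)

print(shortest_repr(1.1))             # 1.1
print(shortest_repr(1.000000000001))  # round-trips exactly
```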

Thomas Wouters wrote:
> I suspect that the change in float.__repr__() did reduce the number of
> surprises over something like this, though: (taken from a 1.5.2 interpreter)
> 
> >>> x = 1.000000000001
> >>> x
> 1.0
> >>> x == 1.0
> 0

Stick in a

    warning: floating-point numbers should not be tested for equality

and that should help at least somewhat.

If you follow the rule i stated above, you would get this:

    >>> x = 1.1
    >>> x
    1.1
    >>> x == 1.1
    warning: floating-point numbers should not be tested for equality
    1
    >>> x = 1.000000000001
    >>> x
    1.0000000000010001
    >>> x == 1.000000000001
    warning: floating-point numbers should not be tested for equality
    1
    >>> x == 1.0
    warning: floating-point numbers should not be tested for equality
    0

All of this seems quite reasonable to me.
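The warning being proposed could be prototyped today at the library level. `float_eq` below is a hypothetical helper, not anything built into Python — just a sketch of the behaviour shown in the session above:

```python
import warnings

def float_eq(a, b):
    # Hypothetical helper: emit the suggested warning whenever
    # floats are compared for exact equality, then compare anyway.
    if isinstance(a, float) or isinstance(b, float):
        warnings.warn(
            "floating-point numbers should not be tested for equality",
            stacklevel=2)
    return a == b

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = float_eq(1.000000000001, 1.0)

print(result)       # False
print(len(caught))  # 1 warning was recorded
```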



-- ?!ng

"Computers are useless.  They can only give you answers."
    -- Pablo Picasso




From skip at mojam.com  Tue Mar 13 20:48:15 2001
From: skip at mojam.com (Skip Montanaro)
Date: Tue, 13 Mar 2001 13:48:15 -0600 (CST)
Subject: [Python-Dev] __all__ in urllib
In-Reply-To: <200103131643.LAA01072@cj20424-a.reston1.va.home.com>
References: <20010306133113.0DDCD373C95@snelboot.oratrix.nl>
	<15012.60277.150431.237935@beluga.mojam.com>
	<200103131643.LAA01072@cj20424-a.reston1.va.home.com>
Message-ID: <15022.31103.7828.938707@beluga.mojam.com>

    Guido> Let me annotate these in-line:

    ...

I just added all the names marked "yes".

Skip



From gmcm at hypernet.com  Tue Mar 13 21:02:14 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 15:02:14 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.26445.896017.406266@w221.z064000254.bwi-md.dsl.cnc.net>
References: <20010313185501.A7459@planck.physik.uni-konstanz.de>
Message-ID: <3AAE3676.13712.7B4F001D@localhost>

Can we please get the followups under control? Bernd sent 
me a private email. I replied privately. Then he forwarded to 
Stackless. So I forwarded my reply to Stackless. Now Jeremy 
adds python-dev to the mix.

> >>>>> "BR" == Bernd Rinn <Bernd.Rinn at epost.de> writes:
> 
>   BR> On Tue, Mar 13, 2001 at 12:17:39PM -0500, Gordon McMillan
>   wrote: >> The one instance I can find on the Stackless list (of
>   attempting >> to use a continuation across interpreter
>   invocations) was a call >> the uthread.wait() in __init__.
>   Arguably a (minor) nuisance, >> arguably bad coding practice
>   (even if it worked).
> 
> [explanation of code practice that led to the error omitted]
> 
>   BR> So I suspect that you might end up with a rule of thumb:
> 
>   BR> """ Don't use classes and libraries that use classes when
>   doing BR> IO in microthreaded programs!  """
> 
>   BR> which might indeed be a problem. Am I overlooking something
>   BR> fundamental here?

Synopsis of my reply: this is more a problem with uthreads 
than coroutines. In any (real) thread, you're limited to dealing 
with one non-blocking IO technique (eg, select) without going 
into a busy loop. If you're dedicating a (real) thread to select, it 
makes more sense to use coroutines than uthreads.

> A few other variations on the question come to mind:
> 
>     If a programmer uses a library implemented via coroutines, can
>     she call library methods from an __xxx__ method?

Certain situations won't work, but you knew that.
 
>     Can coroutines or microthreads co-exist with callbacks
>     invoked by C extensions? 

Again, in certain situations it won't work. Again, you knew that.
 
>     Can a program do any microthread IO in an __call__ method?

Considering you know the answer to that one too, you could've 
phrased it as a parsable question.
 
> If any of these are the sort "in theory" problems that the PEP
> alludes to, then we need a full spec for what is and is not
> allowed.  It doesn't make sense to tell programmers to follow
> unspecified "reasonable" programming practices.

That's easy. In a nested invocation of the Python interpreter, 
you can't use a coroutine created in an outer interpreter. 

In the Python 2 documentation, there are 6 caveats listed in 
the thread module. That's a couple orders of magnitude 
different from the actual number of ways you can screw up 
using the thread module.

- Gordon



From jeremy at alum.mit.edu  Tue Mar 13 21:22:36 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue, 13 Mar 2001 15:22:36 -0500 (EST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE3676.13712.7B4F001D@localhost>
References: <20010313185501.A7459@planck.physik.uni-konstanz.de>
	<3AAE3676.13712.7B4F001D@localhost>
Message-ID: <15022.33164.673632.351851@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GMcM" == Gordon McMillan <gmcm at hypernet.com> writes:

  GMcM> Can we please get the followups under control? Bernd sent me a
  GMcM> private email. I replied privately. Then he forwarded to
  GMcM> Stackless. So I forwarded my reply to Stackless. Now Jeremy
  GMcM> adds python-dev to the mix.

I had no idea what was going on with forwards and the like.  It looks
like someone "bounced" messages, i.e. sent a message to me or a list
I'm on without including me or the list in the to or cc fields.  So I
couldn't tell how I received the message!  So I restored the original
recipients list of the thread (you, stackless, python-dev).

  >> >>>>> "BR" == Bernd Rinn <Bernd.Rinn at epost.de> writes:
  >> A few other variations on the question come to mind:
  >>
  >> If a programmer uses a library implemented via coroutines, can she
  >> call library methods from an __xxx__ method?

  GMcM> Certain situations won't work, but you knew that.

I expected that some won't work, but no one seems willing to tell me
exactly which ones will and which ones won't.  Should the caveat in
the documentation say "avoid using certain __xxx__ methods" <0.9
wink>. 
 
  >> Can coroutines or microthreads co-exist with callbacks invoked by
  >> C extensions?

  GMcM> Again, in certain situations it won't work. Again, you knew
  GMcM> that.

Wasn't sure.
 
  >> Can a program do any microthread IO in an __call__ method?

  GMcM> Considering you know the answer to that one too, you could've
  GMcM> phrased it as a parsable question.

Do I know the answer?  I assume the answer is no, but I don't feel
very certain.
 
  >> If any of these are the sort "in theory" problems that the PEP
  >> alludes to, then we need a full spec for what is and is not
  >> allowed.  It doesn't make sense to tell programmers to follow
  >> unspecified "reasonable" programming practices.

  GMcM> That's easy. In a nested invocation of the Python interpreter,
  GMcM> you can't use a coroutine created in an outer interpreter.

Can we define these situations in a way that doesn't appeal to the
interpreter implementation?  If not, can we at least come up with a
list of what will and will not work at the python level?

  GMcM> In the Python 2 documentation, there are 6 caveats listed in
  GMcM> the thread module. That's a couple orders of magnitude
  GMcM> different from the actual number of ways you can screw up
  GMcM> using the thread module.

The caveats for the thread module seem like pretty minor stuff to me.
If you are writing a threaded application, don't expect code to
continue running after the main thread has exited.

The caveats for microthreads seem to cover a vast swath of territory:
The use of libraries or extension modules that involve callbacks or
instances with __xxx__ methods may lead to application failure.  I
worry about it because it doesn't sound very modular.  The use of
coroutines in one library means I can't use that library in certain
special cases in my own code.

I'm sorry if I sound grumpy, but I feel like I can't get a straight
answer despite several attempts.  At some level, it's fine to say that
there are some corner cases that won't work well with microthreads or
coroutines implemented on top of stackless python.  But I think the
PEP should discuss the details.  I've never written an application
that uses stackless-based microthreads or coroutines so I don't feel
confident in my judgement of the situation.

Which gets back to Bernd's original question:

  GMcM> >   BR> """ Don't use classes and libraries that use classes when
  GMcM> >   BR> IO in microthreaded programs!  """
  GMcM> > 
  GMcM> >   BR> which might indeed be a problem. Am I overlooking something
  GMcM> >   BR> fundamental here?

and the synopsis of your answer:

  GMcM> Synopsis of my reply: this is more a problem with uthreads 
  GMcM> than coroutines. In any (real) thread, you're limited to dealing 
  GMcM> with one non-blocking IO technique (eg, select) without going 
  GMcM> into a busy loop. If you're dedicating a (real) thread to select, it 
  GMcM> makes more sense to use coroutines than uthreads.

I don't understand how this addresses the question, but perhaps I
haven't seen your reply yet.  Mail gets through to python-dev and
stackless at different rates.

Jeremy



From bckfnn at worldonline.dk  Tue Mar 13 21:34:17 2001
From: bckfnn at worldonline.dk (Finn Bock)
Date: Tue, 13 Mar 2001 20:34:17 GMT
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <15021.24645.357064.856281@anthem.wooz.org>
References: <20010312220425.T404@xs4all.nl> <200103122332.SAA22948@cj20424-a.reston1.va.home.com> <15021.24645.357064.856281@anthem.wooz.org>
Message-ID: <3aae83f7.41314216@smtp.worldonline.dk>

>    GvR> Yes, that was on the list once but got dropped.  You might
>    GvR> want to get together with Finn and Samuele to see what their
>    GvR> rules are.  (They allow the use of some keywords at least as
>    GvR> keyword=expression arguments and as object.attribute names.)

[Barry]

>I'm actually a little surprised that the "Jython vs. CPython"
>differences page doesn't describe this (or am I missing it?):

It is mentioned at the bottom of 

     http://www.jython.org/docs/usejava.html

>    http://www.jython.org/docs/differences.html
>
>I thought it used to.

I have now also added it to the difference page.

>IIRC, keywords were allowed if there was no question of it introducing
>a statement.  So yes, keywords were allowed after the dot in attribute
>lookups, and as keywords in argument lists, but not as variable names
>on the lhs of an assignment (I don't remember if they were legal on
>the rhs, but it seems like that ought to be okay, and is actually
>necessary if you allow them argument lists).

- after "def"
- after a dot "." in trailer
- after "import"
- after "from" (in an import stmt)
- and as keyword argument names in arglist

>It would eliminate much of the need for writing obfuscated code like
>"class_" or "klass".

Not the rules as Jython currently has it. Jython only allows the *use*
of external code which contain reserved words as class, method or
attribute names, including overriding such methods.

The distinction between the Name and AnyName grammar productions has
worked very well for us, but I don't think of it as a general "keywords
can be used as identifiers" feature.

regards,
finn



From barry at digicool.com  Tue Mar 13 21:44:04 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Tue, 13 Mar 2001 15:44:04 -0500
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl>
	<200103122332.SAA22948@cj20424-a.reston1.va.home.com>
	<15021.24645.357064.856281@anthem.wooz.org>
	<3aae83f7.41314216@smtp.worldonline.dk>
Message-ID: <15022.34452.183052.362184@anthem.wooz.org>

>>>>> "FB" == Finn Bock <bckfnn at worldonline.dk> writes:

    | - and as keyword argument names in arglist

I think this last one doesn't work:

-------------------- snip snip --------------------
Jython 2.0 on java1.3.0 (JIT: jitc)
Type "copyright", "credits" or "license" for more information.
>>> def foo(class=None): pass
Traceback (innermost last):
  (no code object) at line 0
  File "<console>", line 1
	def foo(class=None): pass
	        ^
SyntaxError: invalid syntax
>>> def foo(print=None): pass
Traceback (innermost last):
  (no code object) at line 0
  File "<console>", line 1
	def foo(print=None): pass
	        ^
SyntaxError: invalid syntax
-------------------- snip snip --------------------

-Barry



From akuchlin at mems-exchange.org  Tue Mar 13 22:33:31 2001
From: akuchlin at mems-exchange.org (Andrew Kuchling)
Date: Tue, 13 Mar 2001 16:33:31 -0500
Subject: [Python-Dev] Removing doc/howto on python.org
Message-ID: <E14cwQ7-0003q3-00@ute.cnri.reston.va.us>

Looking at a bug report Fred forwarded, I realized that after
py-howto.sourceforge.net was set up, www.python.org/doc/howto was
never changed to redirect to the SF site instead.  As of this
afternoon, that's now done; links on www.python.org have been updated,
and I've added the redirect.

Question: is it worth blowing away the doc/howto/ tree now, or should
it just be left there, inaccessible, until work on www.python.org
resumes?

--amk



From tismer at tismer.com  Tue Mar 13 23:44:22 2001
From: tismer at tismer.com (Christian Tismer)
Date: Tue, 13 Mar 2001 23:44:22 +0100
Subject: [Stackless] Re: [Python-Dev] comments on PEP 219
References: <200103131447.HAA32016@localhost.localdomain>
		<3AAE38C3.2C9BAA08@tismer.com> <15021.35084.46284.376573@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <3AAEA2C6.7F1DD2CE@tismer.com>


Jeremy Hylton wrote:
> 
> >>>>> "CT" == Christian Tismer <tismer at tismer.com> writes:
> 
>   CT> Maybe I'm repeating myself, but I'd like to clarify: I do not
>   CT> plan to introduce anything that forces anybody to change her
>   CT> code. This is all about extending the current capabilities.
> 
> The problem with this position is that C code that uses the old APIs
> interferes in odd ways with features that depend on stackless,
> e.g. the __xxx__ methods.[*]  If the old APIs work but are not
> compatible, we'll end up having to rewrite all our extensions so that
> they play nicely with stackless.

My idea was to keep all interfaces as they are, add a stackless flag,
and add stackless versions of all those calls. These are used when
they exist. If not, the old, recursive calls are used. If we can
find such a flag, we're fine. If not, we're hosed.
There is no point in forcing everybody to play nicely with Stackless.

> If we change the core and standard extensions to use stackless
> interfaces, then this style will become the standard style.  If the
> interface is simple, this is no problem.  If the interface is complex,
> it may be a problem.  My point is that if we change the core APIs, we
> place a new burden on extension writers.

My point is that if we extend the core APIs, we do not place
a burden on extension writers, given that we can do the extension
in a transparent way.

> Jeremy
> 
>     [*] If we fix the type-class dichotomy, will it have any effect on
>     the stackful nature of some of these C calls?

I truly cannot answer this one.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From gmcm at hypernet.com  Tue Mar 13 23:16:24 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 17:16:24 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.33164.673632.351851@w221.z064000254.bwi-md.dsl.cnc.net>
References: <3AAE3676.13712.7B4F001D@localhost>
Message-ID: <3AAE55E8.4865.7BC9D6B2@localhost>

[Jeremy]
>   >> If a programmer uses a library implement via coroutines, can
>   she >> call library methods from an __xxx__ method?
> 
>   GMcM> Certain situations won't work, but you knew that.
> 
> I expected that some won't work, but no one seems willing to tell
> me exactly which ones will and which ones won't.  Should the
> caveat in the documentation say "avoid using certain __xxx__
> methods" <0.9 wink>. 

Within an __xxx__ method, you cannot *use* a coroutine not 
created in that method. That is true in current Stackless and 
will be true in Stack-lite. The presence of "library" in the 
question is a distraction.

I guess if you think of a coroutine as just another kind of 
callable object, this looks like a strong limitation. But you 
don't find yourself thinking of threads as plain old callable 
objects, do you? In a threaded program, no matter how 
carefully designed, there is a lot of thread detritus lying 
around. If you don't stay conscious of the transfers of control 
that may happen, you will screw up.

Despite the limitation on using coroutines in magic methods, 
coroutines have an advantage in that transfers of control only 
happen when you want them to. So avoiding unwanted 
transfers of control is vastly easier.
 
>   >> Can coroutines or microthreads co-exist with callbacks
>   invoked by >> C extensions?
> 
>   GMcM> Again, in certain situations it won't work. Again, you
>   knew GMcM> that.
> 
> Wasn't sure.

It's exactly the same situation.
 
>   >> Can a program do any microthread IO in an __call__ method?
> 
>   GMcM> Considering you know the answer to that one too, you
>   could've GMcM> phrased it as a parsable question.
> 
> Do I know the answer?  I assume the answer is no, but I don't
> feel very certain.

What is "microthreaded IO"? Probably the attempt to yield 
control if the IO operation would block. Would doing that 
inside __call__ work with microthreads? No. 

It's not my decision over whether this particular situation 
needs to be documented. Sometime between the 2nd and 5th 
times the programmer encounters this exception, they'll say 
"Oh phooey, I can't do this in __call__, I need an explicit 
method instead."  Python has never claimed that __xxx__ 
methods are safe as milk. Quite the contrary.

 
>   >> If any of these are the sort "in theory" problems that the
>   PEP >> alludes to, then we need a full spec for what is and is
>   not >> allowed.  It doesn't make sense to tell programmers to
>   follow >> unspecified "reasonable" programming practices.
> 
>   GMcM> That's easy. In a nested invocation of the Python
>   interpreter, GMcM> you can't use a coroutine created in an
>   outer interpreter.
> 
> Can we define these situations in a way that doesn't appeal to
> the interpreter implementation? 

No, because it's implementation dependent.

> If not, can we at least come up
> with a list of what will and will not work at the python level?

Does Python attempt to catalogue all the ways you can screw 
up using magic methods? Using threads? How 'bout the 
metaclass hook? Even stronger, do we catalogue all the ways 
that an end-user-programmer can get bit by using a library 
written by someone else that makes use of these facilities?
 
>   GMcM> In the Python 2 documentation, there are 6 caveats listed
>   in GMcM> the thread module. That's a couple order of magnitudes
>   GMcM> different from the actual number of ways you can screw up
>   GMcM> using the thread module.
> 
> The caveats for the thread module seem like pretty minor stuff to
> me. If you are writing a threaded application, don't expect code
> to continue running after the main thread has exited.

Well, the thread caveats don't mention the consequences of 
starting and running a thread within an __init__ method.  

> The caveats for microthreads seems to cover a vast swath of
> territory: The use of libraries or extension modules that involve
> callbacks or instances with __xxx__ methods may lead to
> application failure. 

While your statement is true on the face of it, it is very 
misleading. Things will only fall apart when you code an 
__xxx__ method or callback that uses a pre-existing coroutine 
(or does a uthread swap). You can very easily get in trouble 
right now with threads and callbacks. But the real point is that 
it is *you* the programmer trying to do something that won't 
work (and, BTW, getting notified right away), not some library 
pulling a fast one on you. (Yes, the library could make things 
very hard for you, but that's nothing new.)

Application programmers do not need magic methods. Ever. 
They are very handy for people creating libraries for application 
programmers to use, but we already presume (naively) that 
these people know what they're doing.

> I worry about it because it doesn't sound
> very modular.  The use of coroutines in one library means I can't
> use that library in certain special cases in my own code.

With a little familiarity, you'll find that coroutines are a good 
deal more modular than threads.

In order for that library to violate your expectations, that library 
must be conscious of multiple coroutines (otherwise, it's just a 
plain stackful call / return). It must have kept a coroutine from 
some other call, or had you pass one in. So you (if at all 
clueful <wink>) will be conscious that something is going on 
here.

The issue is the same as if you used a framework which used 
real threads, but never documented anything about the 
threads. You code callbacks that naively and independently 
mutate a global collection. Do you blame Python?

> I'm sorry if I sound grumpy, but I feel like I can't get a
> straight answer despite several attempts.  At some level, it's
> fine to say that there are some corner cases that won't work well
> with microthreads or coroutines implemented on top of stackless
> python.  But I think the PEP should discuss the details.  I've
> never written in an application that uses stackless-based
> microthreads or coroutines so I don't feel confident in my
> judgement of the situation.

And where on the fearful-to-confident scale was the Jeremy 
just getting introduced to threads?
 
> Which gets back to Bernd's original question:
> 
>   GMcM> >   BR> """ Don't use classes and libraries that use
>   classes when GMcM> >   BR> IO in microthreaded programs!  """
>   GMcM> > GMcM> >   BR> which might indeed be a problem. Am I
>   overlooking something GMcM> >   BR> fundamental here?
> 
> and the synopsis of your answer:
> 
>   GMcM> Synopsis of my reply: this is more a problem with
>   uthreads GMcM> than coroutines. In any (real) thread, you're
>   limited to dealing GMcM> with one non-blocking IO technique
>   (eg, select) without going GMcM> into a busy loop. If you're
>   dedicating a (real) thread to select, it GMcM> makes more sense
>   to use coroutines than uthreads.
> 
> I don't understand how this addresses the question, but perhaps I
> haven't seen your reply yet.  Mail gets through to python-dev and
> stackless at different rates.

Coroutines only swap voluntarily. It's very obvious where these 
transfers of control take place hence simple to control when 
they take place. My suspicion is that most people use 
uthreads because they use a familiar model. Not many people 
are used to coroutines, but many situations would be more 
profitably approached with coroutines than uthreads.
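Gordon's point -- that a coroutine's transfers of control are explicit and
visible in the source -- can be sketched with plain generators standing in
for coroutines (an anachronistic illustration, not the Stackless API; the
round-robin scheduler and the `ping`/`pong` names are made up):

```python
# Control moves only at the explicit `yield` lines, so every switch
# is visible right in the source.
def ping(log):
    for _ in range(3):
        log.append("ping")
        yield            # voluntary transfer point

def pong(log):
    for _ in range(3):
        log.append("pong")
        yield            # voluntary transfer point

log = []
tasks = [ping(log), pong(log)]
while tasks:             # trivial round-robin scheduler
    for t in list(tasks):
        try:
            next(t)
        except StopIteration:
            tasks.remove(t)

print(log)   # ['ping', 'pong', 'ping', 'pong', 'ping', 'pong']
```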

- Gordon



From fredrik at pythonware.com  Wed Mar 14 01:28:20 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 01:28:20 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org>
Message-ID: <000b01c0ac1d$ad79bec0$e46940d5@hagrid>

barry wrote:
>
>    | - and as keyword argument names in arglist
>
> I think this last one doesn't work:
> 
> -------------------- snip snip --------------------
> Jython 2.0 on java1.3.0 (JIT: jitc)
> Type "copyright", "credits" or "license" for more information.
> >>> def foo(class=None): pass
> Traceback (innermost last):
>   (no code object) at line 0
>   File "<console>", line 1
> def foo(class=None): pass
>         ^
> SyntaxError: invalid syntax
> >>> def foo(print=None): pass
> Traceback (innermost last):
>   (no code object) at line 0
>   File "<console>", line 1
> def foo(print=None): pass
>         ^
> SyntaxError: invalid syntax
> -------------------- snip snip --------------------

>>> def spam(**kw):
...     print kw
...
>>> spam(class=1)
{'class': 1}
>>> spam(print=1)
{'print': 1}

Cheers /F




From guido at digicool.com  Wed Mar 14 01:55:54 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 13 Mar 2001 19:55:54 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: Your message of "Tue, 13 Mar 2001 17:16:24 EST."
             <3AAE55E8.4865.7BC9D6B2@localhost> 
References: <3AAE3676.13712.7B4F001D@localhost>  
            <3AAE55E8.4865.7BC9D6B2@localhost> 
Message-ID: <200103140055.TAA02495@cj20424-a.reston1.va.home.com>

I've been following this discussion anxiously.  There's one
application of stackless where I think the restrictions *do* come into
play.  Gordon wrote a nice socket demo where multiple coroutines or
uthreads were scheduled by a single scheduler that did a select() on
all open sockets.  I would think that if you use this a lot, e.g. for
all your socket I/O, you might get in trouble sometimes when you
initiate a socket operation from within e.g. __init__ but find you
have to complete it later.

How realistic is this danger?  How serious is this demo?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From greg at cosc.canterbury.ac.nz  Wed Mar 14 02:28:49 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 14 Mar 2001 14:28:49 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE0FE3.2206.7AB85588@localhost>
Message-ID: <200103140128.OAA05167@s454.cosc.canterbury.ac.nz>

Gordon McMillan <gmcm at hypernet.com>:

> But magic methods are a convenience. There's 
> absolutely nothing there that can't be done another way.

Strictly speaking that's true, but from a practical standpoint
I think you will *have* to address __init__ at least, because
it is so ubiquitous and ingrained in the Python programmer's
psyche. Asking Python programmers to give up using __init__
methods will be greeted with about as much enthusiasm as if
you asked them to give up using all identifiers containing
the letter 'e'. :-)

>  - a GUI. Again, no big deal

Sorry, but I think it *is* a significantly large deal...

> be careful that the other threads don't 
> touch the GUI directly. It's basically the same issue with 
> Stackless.

But the other threads don't have to touch the GUI directly
to be a problem.

Suppose I'm building an IDE and I want a button which spawns
a microthread to execute the user's code. The thread doesn't
make any GUI calls itself, but it's spawned from inside a
callback, which, if I understand correctly, will be impossible.

> The one comparable situation 
> in normal Python is crossing threads in callbacks. With the 
> exception of a couple of complete madmen (doing COM 
> support), everyone else learns to avoid the situation.

But if you can't even *start* a thread using a callback,
how do you do anything with threads at all?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From gmcm at hypernet.com  Wed Mar 14 03:22:44 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 21:22:44 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103140055.TAA02495@cj20424-a.reston1.va.home.com>
References: Your message of "Tue, 13 Mar 2001 17:16:24 EST."             <3AAE55E8.4865.7BC9D6B2@localhost> 
Message-ID: <3AAE8FA4.31567.7CAB5C89@localhost>

[Guido]
> I've been following this discussion anxiously.  There's one
> application of stackless where I think the restrictions *do* come
> into play.  Gordon wrote a nice socket demo where multiple
> coroutines or uthreads were scheduled by a single scheduler that
> did a select() on all open sockets.  I would think that if you
> use this a lot, e.g. for all your socket I/O, you might get in
> trouble sometimes when you initiate a socket operation from
> within e.g. __init__ but find you have to complete it later.

Exactly as hard as it is not to run() a thread from within the 
Thread __init__. Most threaders have probably long forgotten 
that they tried that -- once.
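The threading analogue, for reference, using the standard `threading`
module (the `Worker` class is a made-up example): create the thread inside
`__init__`, but transfer control to it only afterwards.

```python
import threading

class Worker(threading.Thread):
    def __init__(self, data):
        super().__init__()
        self.data = data      # finish setting up state first...
        self.result = None    # ...and do NOT call self.start() in here

    def run(self):
        self.result = self.data * 2

w = Worker(21)
w.start()    # the transfer of control happens outside __init__
w.join()
print(w.result)   # 42
```

Starting the thread from inside `__init__` would race `run()` against the
rest of the constructor -- the mistake "most threaders tried once".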

> How realistic is this danger?  How serious is this demo?

It's not a demo. It's in use (proprietary code layered on top of 
SelectDispatcher which is open) as part of a service a major 
player in the video editing industry has recently launched, 
both on the client and server side. Anyone in that industry can 
probably figure out who and (if they read the trades) maybe 
even what from the above, but I'm not comfortable saying more 
publicly.

- Gordon



From gmcm at hypernet.com  Wed Mar 14 03:55:44 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 21:55:44 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103140128.OAA05167@s454.cosc.canterbury.ac.nz>
References: <3AAE0FE3.2206.7AB85588@localhost>
Message-ID: <3AAE9760.19887.7CC991FF@localhost>

Greg Ewing wrote:

> Gordon McMillan <gmcm at hypernet.com>:
> 
> > But magic methods are a convenience. There's 
> > absolutely nothing there that can't be done another way.
> 
> Strictly speaking that's true, but from a practical standpoint I
> think you will *have* to address __init__ at least, because it is
> so ubiquitous and ingrained in the Python programmer's psyche.
> Asking Python programmers to give up using __init__ methods will
> be greeted with about as much enthusiasm as if you asked them to
> give up using all identifiers containing the letter 'e'. :-)

No one's asking them to give up __init__. Just asking them 
not to transfer control from inside an __init__. There are good 
reasons not to transfer control to another thread from within an 
__init__, too.
 
> >  - a GUI. Again, no big deal
> 
> Sorry, but I think it *is* a significantly large deal...
> 
> > be careful that the other threads don't 
> > touch the GUI directly. It's basically the same issue with
> > Stackless.
> 
> But the other threads don't have to touch the GUI directly
> to be a problem.
> 
> Suppose I'm building an IDE and I want a button which spawns a
> microthread to execute the user's code. The thread doesn't make
> any GUI calls itself, but it's spawned from inside a callback,
> which, if I understand correctly, will be impossible.

For a uthread, if it swaps out, yes, because that's an attempt 
to transfer to another uthread not spawned by the callback. So 
you will get an exception if you try it. If you simply want to 
create and use coroutines from within the callback, that's fine 
(though not terribly useful, since the GUI is blocked till you're 
done).
 
> > The one comparable situation 
> > in normal Python is crossing threads in callbacks. With the
> > exception of a couple of complete madmen (doing COM support),
> > everyone else learns to avoid the situation.
> 
> But if you can't even *start* a thread using a callback,
> how do you do anything with threads at all?

Checking the couple GUIs I've done that use threads (mostly I 
use idletasks in a GUI for background stuff) I notice I create 
the threads before starting the GUI. So in this case, I'd 
probably have a worker thread (real) and the GUI thread (real). 
The callback would queue up some work for the worker thread 
and return. The worker thread can use continuations or 
uthreads all it wants.

My comments about GUIs were basically saying that you 
*have* to think about this stuff when you design a GUI - they 
all have rather strong opinions about how your app should be 
architected. You can get into trouble with any of the 
techniques (events, threads, idletasks...) they promote / allow 
/ use. I know it's gotten better, but not very long ago you had 
to be very careful simply to get TK and threads to coexist.

I usually use idle tasks precisely because the chore of 
breaking my task into 0.1 sec chunks is usually less onerous 
than trying to get the GUI to let me do it some other way.
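The idle-task pattern -- do a bounded slice of work, then hand control
back -- can be sketched without tying it to any particular GUI (the plain
`for` loop below stands in for the toolkit's idle queue; `chunked_sum` is
a hypothetical example, not real code from any toolkit):

```python
import time

def chunked_sum(n, budget=0.1):
    # Sum 0..n-1 in slices of at most `budget` seconds, yielding
    # between slices so a GUI event loop could stay responsive.
    total, i = 0, 0
    while i < n:
        deadline = time.monotonic() + budget
        while i < n and time.monotonic() < deadline:
            total += i
            i += 1
        yield None           # give the GUI a turn between chunks
    yield total              # final value

result = None
for step in chunked_sum(1000):   # stand-in for repeated idle callbacks
    if step is not None:
        result = step
print(result)   # 499500
```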

[Now I'll get floods of emails telling me *this* GUI lets me do it 
*that* way...  As far as I'm concerned, "least worst" is all any 
GUI can aspire to.]

- Gordon



From tim.one at home.com  Wed Mar 14 04:04:31 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 13 Mar 2001 22:04:31 -0500
Subject: [Python-Dev] comments on PEP 219
In-Reply-To: <15021.26231.776384.91347@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIHJFAA.tim.one@home.com>

[Jeremy Hylton]
> ...
> One other set of issues, that is sort-of out of bounds for this
> particular PEP, is what control features do we want that can only be
> implemented with stackless.  Can we implement generators or coroutines
> efficiently without a stackless approach?

Icon/CLU-style generator/iterators always return/suspend directly to their
immediate caller/resumer, so it's impossible to get a C stack frame "stuck in
the middle":  whenever they're ready to yield (suspend or return), there's
never anything between them and the context that gave them control  (and
whether the context was coded in C or Python -- generators don't care).

While Icon/CLU do not do so, a generator/iterator in this sense can be a
self-contained object, passed around and resumed by anyone who feels like it;
this kind of object is little more than a single Python execution frame,
popped from the Python stack upon suspension and pushed back on upon
resumption.  For this reason, recursive interpreter calls don't bother it:
whenever it stops or pauses, it's at the tip of the current thread of
control, and returns control to "the next" frame, just like a vanilla
function return.  So if the stack is a linear list in the absence of
generators, it remains so in their presence.  It also follows that it's fine
to resume a generator by making a recursive call into the interpreter (the
resumption sequence differs from a function call in that it must set up the
guts of the eval loop from the state saved in the generator's execution
frame, rather than create a new execution frame).
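Python later grew exactly this Icon/CLU flavor of generator (PEP 255),
which makes the point easy to see in hindsight: the frame suspends and
resumes at the tip of the stack, and even a C-coded resumer such as
`list()` drives it without trouble.

```python
# The generator always yields directly to its immediate resumer, so no
# C stack frame can get "stuck in the middle" of a suspension.
def counter(n):
    i = 0
    while i < n:
        yield i      # suspend: pop this frame off the Python stack
        i += 1       # resume: push it back on and continue here

g = counter(3)
print(next(g))       # 0 -- resumed from Python code
print(list(g))       # [1, 2] -- resumed from C code, no problem
```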

But Guido usually has in mind a much fancier form of generator (note:  contra
PEP 219, I didn't write generator.py -- Guido wrote that after hearing me say
"generator" and falling for Majewski's hypergeneralization of the concept
<0.8 wink>), which can suspend to *any* routine "up the chain".  Then C stack
frames can certainly get stuck in the middle, and so that style of generator
is much harder to implement given the way the interpreter currently works.
In Icon *this* style of "generator" is almost never used, in part because it
requires using Icon's optional "co-expression" facilities (which are optional
because they require hairy platform-dependent assembler to trick the platform
C into supporting multiple stacks; Icon's generators don't need any of that).
CLU has nothing like it.

Ditto for coroutines.




From skip at pobox.com  Wed Mar 14 04:12:02 2001
From: skip at pobox.com (Skip Montanaro)
Date: Tue, 13 Mar 2001 21:12:02 -0600 (CST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE9760.19887.7CC991FF@localhost>
References: <3AAE0FE3.2206.7AB85588@localhost>
	<3AAE9760.19887.7CC991FF@localhost>
Message-ID: <15022.57730.265706.483989@beluga.mojam.com>

>>>>> "Gordon" == Gordon McMillan <gmcm at hypernet.com> writes:

    Gordon> No one's asking them to give up __init__. Just asking them not
    Gordon> to transfer control from inside an __init__. There are good
    Gordon> reasons not to transfer control to another thread from within an
    Gordon> __init__, too.
 
Is this same restriction placed on all "magic" methods like __getitem__?  Is
this the semantic difference between Stackless and CPython that people are
getting all in a lather about?

Skip






From gmcm at hypernet.com  Wed Mar 14 04:25:03 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Tue, 13 Mar 2001 22:25:03 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.57730.265706.483989@beluga.mojam.com>
References: <3AAE9760.19887.7CC991FF@localhost>
Message-ID: <3AAE9E3F.9635.7CE46C9C@localhost>

> >>>>> "Gordon" == Gordon McMillan <gmcm at hypernet.com> writes:
> 
>     Gordon> No one's asking them to give up __init__. Just asking
>     Gordon> them not to transfer control from inside an __init__.
>     Gordon> There are good reasons not to transfer control to
>     Gordon> another thread from within an __init__, too.
> 
> Is this same restriction placed on all "magic" methods like
> __getitem__?  

In the absence of making them interpreter-recursion free, yes.

> Is this the semantic difference between Stackless
> and CPython that people are getting all in a lather about?

What semantic difference? You can't transfer control to a 
coroutine / urthread in a magic method in CPython, either 
<wink>.

- Gordon



From jeremy at alum.mit.edu  Wed Mar 14 02:17:39 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Tue, 13 Mar 2001 20:17:39 -0500 (EST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE9E3F.9635.7CE46C9C@localhost>
References: <3AAE9760.19887.7CC991FF@localhost>
	<3AAE9E3F.9635.7CE46C9C@localhost>
Message-ID: <15022.50867.210827.597710@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GMcM" == Gordon McMillan <gmcm at hypernet.com> writes:

  >> Is this the semantic difference between Stackless and CPython
  >> that people are getting all in a lather about?

  GMcM> What semantic difference? You can't transfer control to a
  GMcM> coroutine / urthread in a magic method in CPython, either
  GMcM> <wink>.

If I have a library or class that uses threads under the covers, I can
create the threads in whatever code block I want, regardless of what
is on the call stack above the block.  The reason that coroutines /
uthreads are different is that the semantics of control transfers are
tied to what the call stack looks like a) when the thread is created
and b) when a control transfer is attempted.

This restriction seems quite at odds with modularity.  (Could I import
a module that creates a thread within an __init__ method?)  The
correctness of a library or class depends on the entire call chain
involved in its use.

It's not at all modular, because a programmer could make a local
decision about organizing a particular module and cause errors in a
module that doesn't even use it directly.  This would occur if module A
uses uthreads, module B is a client of module A, and the user writes a
program that uses module B.  He unsuspectingly adds a call to module A
in an __init__ method and *boom*.

Jeremy

"Python is a language in which the use of uthreads in a module you
didn't know existed can render your own program unusable."  <wink>



From greg at cosc.canterbury.ac.nz  Wed Mar 14 06:09:42 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 14 Mar 2001 18:09:42 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAE9760.19887.7CC991FF@localhost>
Message-ID: <200103140509.SAA05205@s454.cosc.canterbury.ac.nz>

> I'd probably have a worker thread (real) and the GUI thread (real). 

If I have to use real threads to get my uthreads to work
properly, there doesn't seem to be much point in using
uthreads to begin with.

> you *have* to think about this stuff when you design a GUI...
> You can get into trouble with any of the techniques...
> not very long ago you had to be very careful simply to get 
> TK and threads to coexist.

Microthreads should *free* one from all that nonsense. They
should be simple, straightforward, easy to use, and bulletproof.
Instead it seems they're going to be just as tricky to use
properly, only in different ways.

Oh, well, perhaps I'll take another look after a few more
releases and see if anything has improved.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim.one at home.com  Wed Mar 14 06:34:11 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 00:34:11 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131142.GAA30567@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEIMJFAA.tim.one@home.com>

[Paul Prescod]
> David Ascher suggested during the talk that comparisons of floats could
> raise a warning unless you turned that warning off (which only
> knowledgable people would do). I think that would go a long way to
> helping them find and deal with serious floating point inaccuracies in
> their code.

It would go a very short way -- but that may be better than nothing.  Most fp
disasters have to do with "catastrophic cancellation" (a tech term, not a
pejorative), and comparisons have nothing to do with those.  Alas, CC can't
be detected automatically short of implementing interval arithmetic, and even
then tends to raise way too many false alarms unless used in algorithms
designed specifically to exploit interval arithmetic.

[Guido]
> You mean only for == and !=, right?

You have to do all comparisons or none (see below), but in the former case a
warning is silly (groundless paranoia) *unless* the comparands are "close".

Before we boosted repr(float) precision so that people could *see* right off
that they didn't understand Python fp arithmetic, complaints came later.  For
example, I've lost track of how many times I've explained variants of this
one:

Q: How come this loop goes around 11 times?

>>> delta = 0.1
>>> x = 0.0
>>> while x < 1.0:   # no == or != here
...     print x
...     x = x + delta
...

0.0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
>>>

A: It's because 0.1 is not exactly representable in binary floating-point.

Just once out of all those times, someone came back several days later after
spending many hours struggling to understand what that really meant and
implied.  Their followup question was depressingly insightful:

Q. OK, I understand now that for 754 doubles, the closest possible
   approximation to one tenth is actually a little bit *larger* than
   0.1.  So how come when I add a thing *bigger* than one tenth together
   ten times, I get a result *smaller* than one?
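The answer, for the record: each intermediate sum is itself rounded back
to the nearest double, and in this particular chain the rounding errors
fall downward often enough to undershoot. Both halves of the paradox are
easy to verify (using the exact-rational view of a double):

```python
from fractions import Fraction

# The double stored for the literal 0.1 really is a bit *larger* than
# one tenth:
print(Fraction(0.1) > Fraction(1, 10))   # True

# ...yet ten additions of it come out a bit *smaller* than 1.0, because
# every intermediate sum is rounded to the nearest double along the way:
x = 0.0
for _ in range(10):
    x += 0.1
print(x < 1.0)   # True
print(x)         # 0.9999999999999999
```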

the-fun-never-ends-ly y'rs  - tim




From tim.one at home.com  Wed Mar 14 07:01:24 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 01:01:24 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <Pine.LNX.4.10.10103131039260.13108-100000@skuld.kingmanhall.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEIOJFAA.tim.one@home.com>

[Ka-Ping Yee]
> I'll argue now -- just as i argued back then, but louder! -- that
> this isn't necessary.  repr(1.1) can be 1.1 without losing any precision.
>
> Simply stated, you only need to display as many decimal places as are
> necessary to regenerate the number.  So if x happens to be the
> floating-point number closest to 1.1, then 1.1 is all you have to show.
>
> By definition, if you type x = 1.1, x will get the floating-point
> number closest in value to 1.1.

This claim is simply false unless the platform string->float routines do
proper rounding, and that's more demanding than even the anal 754 std
requires (because in the general case proper rounding requires bigint
arithmetic).

> So x will print as 1.1.

By magic <0.1 wink>?

This *can* work, but only if Python does float<->string conversions itself,
leaving the platform libc out of it.  I gave references to directly relevant
papers, and to David Gay's NETLIB implementation code, the last time we went
thru this.  Note that Gay's code bristles with platform #ifdef's, because
there is no portable way in C89 to get the bit-level info this requires.
It's some of the most excruciatingly delicate code I've ever plowed thru.  If
you want to submit it as a patch, I expect Guido will require a promise in
blood that he'll never have to maintain it <wink>.

BTW, Scheme implementations are required to do proper rounding in both
string<->float directions, and minimal-length (wrt idempotence) float->string
conversions (provided that a given Scheme supports floats at all).  That was
in fact the original inspiration for Clinger, Steele and White's work in this
area.  It's exactly what you want too (because it's exactly what you need to
make your earlier claims true).  A more recent paper by Dybvig and ??? (can't
remember now) builds on the earlier work, using Gay's code by reference as a
subroutine, and speeding some of the other cases where Gay's code is slothful
by a factor of about 70.
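In hindsight: CPython did eventually adopt Gay-style correctly-rounded
conversions (in 2.7 and 3.1), giving exactly the behavior Ping asked for.
On a modern interpreter:

```python
# repr() of a float is now the shortest decimal string that round-trips
# to the same double.
x = 1.1
s = repr(x)
print(s)               # 1.1  (not 1.1000000000000001)
print(float(s) == x)   # True: the round-trip loses nothing
```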

scheme-does-a-better-job-on-numerics-in-many-respects-ly y'rs  - tim




From tim.one at home.com  Wed Mar 14 07:21:57 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 01:21:57 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103140509.SAA05205@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEIPJFAA.tim.one@home.com>

[Greg Ewing]
> If I have to use real threads to get my uthreads to work
> properly, there doesn't seem to be much point in using
> uthreads to begin with.
> ...
> Microthreads should *free* one from all that nonsense. They
> should be simple, straightforward, easy to use, and bulletproof.
> Instead it seems they're going to be just as tricky to use
> properly, only in different ways.

Stackless uthreads don't exist to free you from nonsense, they exist because
they're much lighter than OS-level threads.  You can have many more of them
and context switching is much quicker.  Part of the price is that they're not
as flexible as OS-level threads:  because they get no support at all from the
OS, they have no way to deal with the way C (or any other language) uses the
HW stack (from where most of the odd-sounding restrictions derive).

One thing that impressed me at the Python Conference last week was how many
of the talks I attended presented work that relied on, or was in the process
of moving to, Stackless.  This stuff has *very* enthused users!  Unsure how
many rely on uthreads vs how many on coroutines (Stackless wasn't the focus
of any of these talks), but they're the same deal wrt restrictions.

BTW, I don't know of a coroutine facility in any x-platform language that
plays nicely (in the sense of not imposing mounds of implementation-derived
restrictions) across foreign-language boundaries.  If you do, let's get a
reference so we can rip off their secrets.

uthreads-are-much-easier-to-provide-in-an-os-than-in-a-language-ly
    y'rs  - tim




From tim.one at home.com  Wed Mar 14 08:27:21 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 02:27:21 -0500
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <200103131532.f2DFWpw04691@snark.thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>

[Eric S. Raymond]
> I bit the bullet and hand-rolled a recursive-descent expression parser
> for CML2 to replace the Earley-algorithm parser described in my
> previous note.  It is a little more than twice as fast as the SPARK
> code, cutting the CML2 compiler runtime almost exactly in half.
>
> Sigh.  I had been intending to recommend SPARK for the Python standard
> library -- as I pointed out in my PC9 paper, it would be the last
> piece stock Python needs to be an effective workbench for
> minilanguage construction.  Unfortunately I'm now convinced Paul
> Prescod is right and it's too slow for production use, at least at
> version 0.6.1.

If all you got out of crafting a one-grammar parser by hand is a measly
factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
parser generators for restricted grammars, in C).  For the all-purpose Earley
parser to get that close is really quite an accomplishment!  SPARK was
written primarily for rapid prototyping, at which it excels (how many times
did you change your grammar during development?  how much longer would it
have taken you to adjust had you needed to rework your RD parser each time?).
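For flavor, a hand-rolled recursive-descent parser in the style Eric
describes might look like this -- a toy arithmetic grammar as a sketch,
not CML2's actual grammar:

```python
import re

def parse(src):
    # One function per grammar rule; each consumes tokens left to right.
    tokens = re.findall(r"\d+|[+*()]", src)
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def expr():          # expr := term ('+' term)*
        nonlocal pos
        v = term()
        while peek() == "+":
            pos += 1
            v += term()
        return v
    def term():          # term := atom ('*' atom)*
        nonlocal pos
        v = atom()
        while peek() == "*":
            pos += 1
            v *= atom()
        return v
    def atom():          # atom := NUMBER | '(' expr ')'
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        if tok == "(":
            v = expr()
            pos += 1     # skip the closing ')'
            return v
        return int(tok)
    return expr()

print(parse("2+3*4"))    # 14
print(parse("(2+3)*4"))  # 20
```

The speed comes from having no table machinery at all: each rule is a
direct function call, at the cost of rewriting those functions whenever
the grammar changes -- exactly the trade-off against SPARK.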

perhaps-you're-just-praising-it-via-faint-damnation<wink>-ly y'rs  - tim




From fredrik at pythonware.com  Wed Mar 14 09:25:19 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 09:25:19 +0100
Subject: [Python-Dev] CML2 compiler speedup
References: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>
Message-ID: <014401c0ac60$4f0b1c60$e46940d5@hagrid>

tim wrote:
> If all you got out of crafting a one-grammar parser by hand is a measly
> factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
> parser generators for restricted grammars, in C).

talking about performance, has anyone played with using SRE's
lastindex/lastgroup stuff with SPARK?

(is there anything else I could do in SRE to make SPARK run faster?)

Cheers /F




From tismer at tismer.com  Wed Mar 14 10:19:44 2001
From: tismer at tismer.com (Christian Tismer)
Date: Wed, 14 Mar 2001 10:19:44 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <3AAE0FE3.2206.7AB85588@localhost>
		<3AAE9760.19887.7CC991FF@localhost> <15022.57730.265706.483989@beluga.mojam.com>
Message-ID: <3AAF37B0.DFCC027A@tismer.com>


Skip Montanaro wrote:
> 
> >>>>> "Gordon" == Gordon McMillan <gmcm at hypernet.com> writes:
> 
>     Gordon> No one's asking them to give up __init__. Just asking them not
>     Gordon> to transfer control from inside an __init__. There are good
>     Gordon> reasons not to transfer control to another thread from within an
>     Gordon> __init__, too.
> 
> Is this same restriction placed on all "magic" methods like __getitem__?  Is
> this the semantic difference between Stackless and CPython that people are
> getting all in a lather about?

Yes, at the moment all __xxx__ stuff.
The semantic difference is at a different location:
Normal function calls are free to switch around. That is the
big advantage over CPython, which might be called a semantic
difference.
The behavior/constraints of __xxx__ have not changed yet; here
both Pythons are exactly the same! :-)

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From tismer at tismer.com  Wed Mar 14 10:39:17 2001
From: tismer at tismer.com (Christian Tismer)
Date: Wed, 14 Mar 2001 10:39:17 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103140128.OAA05167@s454.cosc.canterbury.ac.nz>
Message-ID: <3AAF3C45.1972981F@tismer.com>


Greg Ewing wrote:

<snip>

> Suppose I'm building an IDE and I want a button which spawns
> a microthread to execute the user's code. The thread doesn't
> make any GUI calls itself, but it's spawned from inside a
> callback, which, if I understand correctly, will be impossible.

This doesn't need to be a problem with Microthreads.
Your IDE can spawn a new process at any time. The
process will simply not be started until the interpreter recursion is
done. I think this is exactly what we want.
Similarly the __init__ situation: usually you want
to create a new process, but you don't care when it
is finally scheduled.

So, the only remaining restriction is: If you *force* the
system to schedule microthreads in a recursive call, then
you will be bitten by the first uthread that returns to
a frame which has been locked by a different interpreter.

It is perfectly fine to create uthreads or coroutines in
the context of __init__. Stackless of course allows
re-using frames that have been in any recursion. The
point is: after a recursive interpreter is gone, there
is no problem with using its frames.
We just need to avoid making __init__ the workhorse,
which is bad style, anyway.

> > The one comparable situation
> > in normal Python is crossing threads in callbacks. With the
> > exception of a couple of complete madmen (doing COM
> > support), everyone else learns to avoid the situation.
> 
> But if you can't even *start* a thread using a callback,
> how do you do anything with threads at all?

You can *create* a thread using a callback. It will be started
after the callback is gone. That's sufficient in most cases.
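Christian's create-now, start-later rule can be sketched with a pending
queue (all names here -- `spawn`, `user_task`, `on_button_click` -- are
hypothetical, not the Stackless API):

```python
pending = []   # tasks created inside callbacks, started afterwards
log = []

def spawn(gen):
    pending.append(gen)        # safe inside any (recursive) callback

def user_task():
    log.append("task ran")
    yield

def on_button_click():
    # imagine we are deep inside the GUI's recursive dispatch here
    spawn(user_task())         # creates the task, transfers nothing
    log.append("callback returned")

on_button_click()
for t in pending:              # the event loop drains new tasks later
    for _ in t:
        pass

print(log)   # ['callback returned', 'task ran']
```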

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From tim.one at home.com  Wed Mar 14 12:02:12 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 06:02:12 -0500
Subject: [Python-Dev] Minutes from the Numeric Coercion dev-day session
In-Reply-To: <200103131018.FAA30047@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEJIJFAA.tim.one@home.com>

[Guido]
> I'd like to argue about that.  I think the extent to which HWFP
> doesn't work for newbies is mostly related to the change we made in
> 2.0 where repr() (and hence the interactive prompt) show full
> precision, leading to annoyances like repr(1.1) == '1.1000000000000001'.
>
> I've noticed that the number of complaints I see about this went way
> up after 2.0 was released.

Indeed yes, but I think that's a *good* thing.  We can't stop people from
complaining, but we can influence *what* they complain about, and it's
essential for newbies to learn ASAP that they have no idea how binary fp
arithmetic works.  Note that I spend a lot more of my life replying to these
complaints than you <wink>, and I can cut virtually all of them off early now
by pointing to the RepresentationError wiki page.  Before, it was an endless
sequence of "unique" complaints about assorted things that "didn't work
right", and that was much more time-consuming for me.  Of course, it's not a
positive help to the newbies so much as that scaring them early saves them
greater troubles later <no wink>.

Regular c.l.py posters can (& do!) handle this now too, thanks to hearing the
*same* complaint repeatedly now.  For example, over the past two days there
have been more than 40 messages on c.l.py about this, none of them stemming
from the conference or Moshe's PEP, and none of them written by me.  It's a
pattern:

+ A newcomer to Python complains about the interactive-prompt fp display.

+ People quickly uncover that's the least of their problems (that, e.g., they
truly *believe* Python should get dollars and cents exactly right all by
itself, and are programming as if that were true).

+ The fp display is the easiest of all fp surprises to explain fully and
truthfully (although the wiki page should make painfully clear that "easiest"
!= "easy" by a long shot), so is the quickest route toward disabusing them of
their illusions.

+ A few people suggest they use my FixedPoint.py instead; a few more that
they compute using cents instead (using ints or longs); and there's always
some joker who flames that if they're writing code for clients and have such
a poor grasp of fp reality, they should be sued for "technical incompetence".

Except for the flames, this is good in my eyes.
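The display surprise those newcomers hit is easy to demonstrate (a modern sketch; the `decimal` module postdates this thread, but it shows exactly what binary fp stores):

```python
from decimal import Decimal

# The binary double nearest to 1.1 is not 1.1; converting the float
# to Decimal reveals the value that is actually stored.
x = 1.1
print(Decimal(x))
# 1.100000000000000088817841970012523233890533447265625
```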

> I expect that most newbies don't use floating point in a fancy way,
> and would never notice it if it was slightly off as long as the output
> was rounded like it was before 2.0.

I couldn't disagree more that ignorance is to be encouraged, either in
newbies or in experts.  Computational numerics is a difficult field with
major consequences in real life, and if the language can't actively *help*
people with that, it should at least avoid encouraging a fool's confidence in
their folly.  If that isn't virulent enough for you <wink>, read Kahan's
recent "Marketing versus Mathematics" rant, here:

    http://www.cs.berkeley.edu/~wkahan/MktgMath.pdf

A point he makes over and over, illustrated with examples, is this:

    Decimal displays of Binary nonintegers cannot always be WYSIWYG.

    Trying to pretend otherwise afflicts both customers and
    implementors with bugs that go mostly misdiagnosed, so "fixing"
    one bug merely spawns others.


In a specific example of a nasty real-life bug beginning on page 13, he calls
the conceit (& source of the bug) of rounding fp displays to 15 digits
instead of 17 "a pious fraud".  And he's right.  It spares the implementer
some shallow complaints at the cost of leading naive users down a garden
path, where they end up deeper and deeper in weeds over their heads.

Of course he acknowledges that 17-digit display "[annoys] users who expected
roundoff to degrade only the last displayed digit of simple expressions, and
[confuses] users who did not expect roundoff at all" -- but seeking to fuzz
those truths has worse consequences.

In the end, he smacks up against the same need to favor one group at the
expense of the other:

   Binary floating-point is best for mathematicians, engineers and most
   scientists, and for integers that never get rounded off.  For everyone
   else Decimal floating-point is best because it is the only way What
   You See can be What You Get, which is a big step towards reducing
   programming languages' capture cross-section for programming errors.

He's wrong via omission about the latter, though:  rationals are also a way
to achieve that (so long as you stick to + - * /; decimal fp is still
arguably better once a sqrt or transcendental gets into the pot).

>> Presumably ABC used rationals because usability studies showed
>> they worked best (or didn't they test this?).

> No, I think at best the usability studies showed that floating point
> had problems that the ABC authors weren't able to clearly explain to
> newbies.  There was never an experiment comparing FP to rationals.

>> Presumably the TeachScheme! dialect of Scheme uses rationals for
>> the same reason.

> Probably for the same reasons.

Well, you cannot explain binary fp *clearly* to newbies in reasonable time,
so I can't fault any teacher or newbie-friendly language for running away
from it.  Heck, most college-age newbies are still partly naive about fp
numerics after a good one-semester numerical analysis course (voice of
experience, there).

>> 1/10 and 0.1 are indeed very different beasts to me).

> Another hard question: does that mean that 1 and 1.0 are also very
> different beasts to you?  They weren't to the Alice users who started
> this by expecting 1/4 to represent a quarter turn.

1/4 *is* a quarter turn, and exactly a quarter turn, under every alternative
being discussed (binary fp, decimal fp, rationals).  The only time it isn't
is under Python's current rules.  So the Alice users will (presumably) be
happy with any change whatsoever from the status quo.

They may not be so happy if they do ten 1/10 turns and don't get back to
where they started (which would happen under binary fp, but not decimal fp or
rationals).

Some may even be so unreasonable <wink> as to be unhappy if six 1/6 turns
wasn't a wash (which leaves only rationals as surprise-free).
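Both turn examples are easy to check (a sketch using today's `fractions` module as a stand-in for rationals; it did not exist in 2001):

```python
from fractions import Fraction

# Ten 0.1 steps under binary fp don't quite make a full turn...
fp_total = sum([0.1] * 10)
print(fp_total == 1.0)                  # False

# ...while ten 1/10 steps, or six 1/6 steps, as rationals are exact.
print(sum([Fraction(1, 10)] * 10) == 1)  # True
print(sum([Fraction(1, 6)] * 6) == 1)    # True
```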

Paul Dubois wants a way to tag fp literals (see his proposal).  That's
reasonable for his field.  DrScheme's Student levels have a way to tag
literals as inexact too, which allows students to get their toes wet with
binary fp while keeping their gonads on dry land.  Most people can't ride
rationals forever, but they're great training wheels; thoroughly adequate for
dollars-and-cents computations (the denominators don't grow when they're all
the same, so $1.23 computations don't "blow up" in time or space); and a
darned useful tool for dead-serious numeric grownups in sticky numerical
situations (rationals are immune to all of overflow, underflow, roundoff
error, and catastrophic cancellation, when sticking to + - * /).
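The dollars-and-cents claim about denominators can be checked directly (again with the modern `fractions` module standing in for a rational type):

```python
from fractions import Fraction

# Summing many $1.23 amounts never grows the denominator past 100,
# so the representation stays small no matter how long the run is.
price = Fraction(123, 100)
total = sum([price] * 10000)
print(total)                  # 12300 -- exact, and the denominator is 1
```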

Given that Python can't be maximally friendly to everyone here, and has a
large base of binary fp users I don't hate at all <wink>, the best I can
dream up is:

    1.3    binary fp, just like now

    1.3_r  exact rational (a tagged fp literal)

    1/3    exact rational

    1./3   binary fp

So, yes, 1.0 and 1 are different beasts to me:  the "." alone and without an
"_r" tag says "I'm an approximation, and approximations are contagious:
inexact in, inexact out".

Note that the only case where this changes the meaning of existing code is

    1/3

But that has to change anyway lest the Alice users stay stuck at 0 forever.
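For the record, this is roughly how things later played out: `/` became true division, `//` kept the old floor behaviour, and exact rationals arrived in the `fractions` module (none of which existed when this was written):

```python
from fractions import Fraction

print(1 / 4)           # 0.25 -- true division; no more "stuck at 0"
print(1 // 4)          # 0    -- the old integer-division behaviour
print(Fraction(1, 4))  # 1/4  -- an exact quarter turn
```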

> You know where I'm leaning...  I don't know that newbies are genuinely
> hurt by FP.

They certainly are burned by binary FP if they go on to do any numeric
programming.  The junior high school textbook formula for solving a quadratic
equation is numerically unstable.  Ditto the high school textbook formula for
computing variance.  Etc.  They're *surrounded* by deep pits; but they don't
need to be, except for the lack of *some* way to spell a newbie-friendly
arithmetic type.
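The quadratic-formula instability Tim mentions is concrete (a sketch; the rewrite that avoids subtracting nearly equal quantities is the standard numerical-analysis fix, not anything from this thread):

```python
import math

def roots_textbook(a, b, c):
    # The junior-high formula: fine on paper, but when b*b >> 4*a*c
    # the "-b + sqrt(...)" term cancels catastrophically.
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def roots_stable(a, b, c):
    # Compute the large-magnitude root first, then recover the small
    # one from the product of roots (r1 * r2 == c / a).
    d = math.sqrt(b * b - 4 * a * c)
    q = -(b + math.copysign(d, b)) / 2.0
    return q / a, c / q

# With a=1, b=1e8, c=1 the true small root is about -1e-8; the
# textbook version loses most of its digits to cancellation.
print(roots_textbook(1.0, 1e8, 1.0)[0])  # about -7.5e-09: badly off
print(roots_stable(1.0, 1e8, 1.0)[1])    # close to -1e-08
```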

> If we do it right, the naive ones will try 11.0/10.0, see
> that it prints 1.1, and be happy;

Cool.  I make a point of never looking at my chest x-rays either <0.9 wink>.

> the persistent ones will try 1.1**2-1.21, ask for an explanation, and
> get an introduction to floating point.  This *doesn't* have to explain all
> the details, just the two facts that you can lose precision and that 1.1
> isn't representable exactly in binary.

Which leaves them where?  Uncertain & confused (as you say, they *don't* know
all the details, or indeed really any of them -- they just know "things go
wrong", without any handle on predicting the extent of the problems, let
alone any way of controlling them), and without an alternative they *can*
feel confident about (short of sticking to integers, which may well be the
most frequent advice they get on c.l.py).  What kind of way is that to treat
a poor newbie?

I'll close w/ Kahan again:

    Q. Besides its massive size, what distinguishes today's market for
       floating-point arithmetic from yesteryears'?

    A. Innocence
       (if not inexperience, naïveté, ignorance, misconception,
        superstition, ...)

non-extended-binary-fp-is-an-expert's-tool-ly y'rs  - tim




From bckfnn at worldonline.dk  Wed Mar 14 12:48:51 2001
From: bckfnn at worldonline.dk (Finn Bock)
Date: Wed, 14 Mar 2001 11:48:51 GMT
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <15022.34452.183052.362184@anthem.wooz.org>
References: <20010312220425.T404@xs4all.nl> <200103122332.SAA22948@cj20424-a.reston1.va.home.com> <15021.24645.357064.856281@anthem.wooz.org> <3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org>
Message-ID: <3aaf5a78.8312542@smtp.worldonline.dk>

>>>>>> "FB" == Finn Bock <bckfnn at worldonline.dk> writes:
>
>    | - and as keyword argument names in arglist
>
>I think this last one doesn't work:

[Barry]

>-------------------- snip snip --------------------
>Jython 2.0 on java1.3.0 (JIT: jitc)
>Type "copyright", "credits" or "license" for more information.
>>>> def foo(class=None): pass
>Traceback (innermost last):
>  (no code object) at line 0
>  File "<console>", line 1
>	def foo(class=None): pass
>	        ^
>SyntaxError: invalid syntax
>>>> def foo(print=None): pass
>Traceback (innermost last):
>  (no code object) at line 0
>  File "<console>", line 1
>	def foo(print=None): pass
>	        ^
>SyntaxError: invalid syntax
>-------------------- snip snip --------------------

You are trying to use it in the grammar production "varargslist". It
doesn't work there. It only works in the grammar production "arglist".

The distinction is a good example of how jython tries to make it
possible to use reserved words defined in external code, but does not
try to allow the use of reserved words everywhere.

regards,
finn



From bckfnn at worldonline.dk  Wed Mar 14 12:49:54 2001
From: bckfnn at worldonline.dk (Finn Bock)
Date: Wed, 14 Mar 2001 11:49:54 GMT
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <000b01c0ac1d$ad79bec0$e46940d5@hagrid>
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org> <000b01c0ac1d$ad79bec0$e46940d5@hagrid>
Message-ID: <3aaf5aa5.8357597@smtp.worldonline.dk>

>barry wrote:
>>
>>    | - and as keyword argument names in arglist
>>
>> I think this last one doesn't work:
>> 
>> -------------------- snip snip --------------------
>> Jython 2.0 on java1.3.0 (JIT: jitc)
>> Type "copyright", "credits" or "license" for more information.
>> >>> def foo(class=None): pass
>> Traceback (innermost last):
>>   (no code object) at line 0
>>   File "<console>", line 1
>> def foo(class=None): pass
>>         ^
>> SyntaxError: invalid syntax
>> >>> def foo(print=None): pass
>> Traceback (innermost last):
>>   (no code object) at line 0
>>   File "<console>", line 1
>> def foo(print=None): pass
>>         ^
>> SyntaxError: invalid syntax
>> -------------------- snip snip --------------------

[/F]

>>>> def spam(**kw):
>...     print kw
>...
>>>> spam(class=1)
>{'class': 1}
>>>> spam(print=1)
>{'print': 1}

Exactly.

This feature is mainly used by constructors for Java objects, where
keywords become bean property assignments.

  b = JButton(text="Press Me", enabled=1, size=(30, 40))

is a shorthand for

  b = JButton()
  b.setText("Press Me")
  b.setEnabled(1)
  b.setSize(30, 40)

Since the bean property names are outside Jython's control, we allow
AnyName in that position.
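For comparison, CPython never accepted reserved words as literal keyword arguments, but the same dictionary can be passed with `**` unpacking (a sketch in modern syntax):

```python
def spam(**kw):
    return kw

# spam(class=1) is a SyntaxError in CPython, but unpacking a dict
# with reserved-word keys gets the same result through:
print(spam(**{'class': 1, 'print': 2}))  # {'class': 1, 'print': 2}
```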

regards,
finn



From fredrik at pythonware.com  Wed Mar 14 14:09:51 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 14:09:51 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org> <000b01c0ac1d$ad79bec0$e46940d5@hagrid> <3aaf5aa5.8357597@smtp.worldonline.dk>
Message-ID: <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>

finn wrote:

> >>>> spam(class=1)
> >{'class': 1}
> >>>> spam(print=1)
> >{'print': 1}
> 
> Exactly.

how hard would it be to fix this in CPython?  can it be
done in time for 2.1?  (Thomas?)

Cheers /F




From thomas at xs4all.net  Wed Mar 14 14:58:50 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 14 Mar 2001 14:58:50 +0100
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>; from fredrik@pythonware.com on Wed, Mar 14, 2001 at 02:09:51PM +0100
References: <20010312220425.T404@xs4all.nl><200103122332.SAA22948@cj20424-a.reston1.va.home.com><15021.24645.357064.856281@anthem.wooz.org><3aae83f7.41314216@smtp.worldonline.dk> <15022.34452.183052.362184@anthem.wooz.org> <000b01c0ac1d$ad79bec0$e46940d5@hagrid> <3aaf5aa5.8357597@smtp.worldonline.dk> <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>
Message-ID: <20010314145850.D404@xs4all.nl>

On Wed, Mar 14, 2001 at 02:09:51PM +0100, Fredrik Lundh wrote:
> finn wrote:

> > >>>> spam(class=1)
> > >{'class': 1}
> > >>>> spam(print=1)
> > >{'print': 1}
> > 
> > Exactly.

> how hard would it be to fix this in CPython?  can it be
> done in time for 2.1?  (Thomas?)

Well, monday night my jetlag hit very badly (I flew back on the night from
saturday to sunday) and caused me to skip an entire night of sleep. I spent
part of that breaking my brain over the parser :) I have no experience with
parsers or parser-writing, by the way, so this comes hard to me, and I have
no clue how this is solved in other parsers.

I seriously doubt it can be done for 2.1, unless someone knows parsers well
and can deliver an extended version of the current parser well before the
next beta. Changing the parser to something not so limited as our current
parser would be too big a change to slip in right before 2.1. 

Fixing the current parser is possible, but not straightforward. As far as I
can figure out, the parser first breaks up the file in elements and then
classifies the elements, and if an element cannot be classified, it is left
as a bareword for the subsequent passes to catch it as either a valid
identifier in a valid context, or a syntax error.

I guess it should be possible to hack the parser so it accepts other
statements where it expects an identifier, and then treats those statements
as strings, but you can't just accept all statements -- some will be needed
to bracket the identifier, or you get weird behaviour when you say 'def ()'.
So you need to maintain a list of acceptable statements and try each of
those... My guess is that it's possible, I just haven't figured out how to
do it yet. Can we force a certain 'ordering' on the keywords (their symbolic
number as #defined in graminit.h) in some way?

Another solution would be to do it explicitly in Grammar. I posted an
attempt at that before, but it hurts. It can be done in two ways, both of
which hurt for different reasons :) For example,

funcdef: 'def' NAME parameters ':' suite

can be changed into

funcdef: 'def' nameorkw parameters ':' suite
nameorkw: NAME | 'def' | 'and' | 'pass' | 'print' | 'return' | ...

or in

funcdef: 'def' (NAME | 'def' | 'and' | 'pass' | 'print' | ...) parameters ':' suite

The first means changing the places that currently accept a NAME, and that
means that all places where the compiler does STR(node) have to be checked.
There is a *lot* of those, and it isn't directly obvious whether they expect
node to be a NAME, or really know that, or think they know that. STR() could
be made to detect 'nameorkw' nodetypes and get the STR() of its first child
if so, but that's really an ugly hack.

The second way is even more of an ugly hack, but it doesn't require any
changes in the parser. It just requires making the Grammar look like random
garbage :) Of course, we could keep the grammar the way it is, and
preprocess it before feeding it to the parser, extracting all keywords
dynamically and sneakily replacing NAME with (NAME | keywords )... hmm...
that might actually be workable. It would still be a hack, though.
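That "preprocess the Grammar" idea can be sketched mechanically (a hypothetical helper, not the real pgen input pipeline; the keyword-extraction regex and rule text here are illustrative only):

```python
import re

def relax_names(grammar_text):
    # Collect every quoted keyword in the grammar and widen NAME to
    # (NAME | keyword | ...) so keywords are accepted wherever a
    # plain name is.
    keywords = sorted(set(re.findall(r"'([a-z]+)'", grammar_text)))
    alternatives = " | ".join("'%s'" % kw for kw in keywords)
    return grammar_text.replace("NAME", "(NAME | %s)" % alternatives)

rule = "funcdef: 'def' NAME parameters ':' suite"
print(relax_names(rule))
# funcdef: 'def' (NAME | 'def') parameters ':' suite
```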

Now-for-something-easy--meetings!-ly y'rs ;)
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fredrik at pythonware.com  Wed Mar 14 15:03:21 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Wed, 14 Mar 2001 15:03:21 +0100
Subject: [Python-Dev] OT: careful with that perl code
Message-ID: <011601c0ac8f$8cb66b80$0900a8c0@SPIFF>

http://slashdot.org/article.pl?sid=01/03/13/208259&mode=nocomment

    "because he wasn't familiar with the distinction between perl's
    scalar and list context, S. now has a police record"




From jeremy at alum.mit.edu  Wed Mar 14 15:25:49 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 14 Mar 2001 09:25:49 -0500 (EST)
Subject: [Python-Dev] Next-to-last wart in Python syntax.
In-Reply-To: <00ca01c0ac88$0ec93150$0900a8c0@SPIFF>
References: <20010312220425.T404@xs4all.nl>
	<200103122332.SAA22948@cj20424-a.reston1.va.home.com>
	<15021.24645.357064.856281@anthem.wooz.org>
	<3aae83f7.41314216@smtp.worldonline.dk>
	<15022.34452.183052.362184@anthem.wooz.org>
	<000b01c0ac1d$ad79bec0$e46940d5@hagrid>
	<3aaf5aa5.8357597@smtp.worldonline.dk>
	<00ca01c0ac88$0ec93150$0900a8c0@SPIFF>
Message-ID: <15023.32621.173685.834783@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "FL" == Fredrik Lundh <fredrik at pythonware.com> writes:

  FL> finn wrote:
  >> >>>> spam(class=1)
  >> >{'class': 1}
  >> >>>> spam(print=1)
  >> >{'print': 1}
  >>
  >> Exactly.

  FL> how hard would it be to fix this in CPython?  can it be done in
  FL> time for 2.1?  (Thomas?)

Only if he can use the time machine to slip it in before 2.1b1.

Jeremy



From gmcm at hypernet.com  Wed Mar 14 16:08:16 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Wed, 14 Mar 2001 10:08:16 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <15022.50867.210827.597710@w221.z064000254.bwi-md.dsl.cnc.net>
References: <3AAE9E3F.9635.7CE46C9C@localhost>
Message-ID: <3AAF4310.26204.7F683B24@localhost>

[Jeremy]
> >>>>> "GMcM" == Gordon McMillan <gmcm at hypernet.com> writes:
> 
>   >> Is this the semantic difference between Stackless and
>   CPython >> that people are getting all in a lather about?
> 
>   GMcM> What semantic difference? You can't transfer control to a
>   GMcM> coroutine / urthread in a magic method in CPython, either
>   GMcM> <wink>.
> 
> If I have a library or class that uses threads under the covers,
> I can create the threads in whatever code block I want,
> regardless of what is on the call stack above the block.  The
> reason that coroutines / uthreads are different is that the
> semantics of control transfers are tied to what the call stack
> looks like a) when the thread is created and b) when a control
> transfer is attempted.

Just b) I think.
 
> This restriction seems quite at odds with modularity.  (Could I
> import a module that creates a thread within an __init__ method?)
>  The correctness of a library or class depends on the entire call
> chain involved in its use.

Coroutines are not threads, nor are uthreads. Threads are 
used for comparison purposes because for most people, they 
are the only model for transfers of control outside regular call / 
return. My first serious programming language was IBM 
assembler which, at the time, did not have call / return. That 
was one of about 5 common patterns used. So I don't suffer 
from the illusion that call / return is the only way to do things.

In some ways threads make a lousy model for what's going 
on. They are OS level things. If you were able, on your first 
introduction to threads, to immediately fit them into your 
concept of "modularity", then you are truly unique. They are 
antithetical to my notion of modularity.

If you have another model outside threads and call / return, 
trot it out. It's sure to be a fresher horse than this one.
 
> It's not at all modular, because a programmer could make a local
> decision about organizing a particular module and cause errors in
> a module they don't even use directly.  This would occur if
> module A uses uthreads, module B is a client of module A, and the
> user writes a program that uses module B.  He unsuspectingly adds
> a call to module A in an __init__ method and *boom*.

You will find this enormously more difficult to demonstrate 
than assert. Module A does something in the background. 
Therefore module B does something in the background. There 
is no technique for backgrounding processing which does not 
have some implications for the user of module B. If modules A 
and or B are poorly coded, it will have obvious implications for 
the user.

> "Python is a language in which the use of uthreads in a module
> you didn't know existed can render your own program unusable." 
> <wink>

Your arguments are all based on rather fantastical notions of 
evil module writers pulling dirty tricks on clueless innocent 
programmers. In fact, they're based on the idea that the 
programmer was successfully using module AA, then 
switched to using A (which must have been advertised as a 
drop in replacement) and then found that they went "boom" in 
an __init__ method that used to work. Python today has no 
shortage of ways in which evil module writers can cause 
misery for programmers. Stackless does not claim that 
module writers claiming full compatibility are telling the truth. If 
module A does not suit your needs, go back to module AA.

Obviously, those of us who like Stackless would be delighted 
to have all interpreter recursions removed. It's also obvious 
where your rhetorical argument is headed: Stackless is 
dangerous unless all interpreter recursions are eliminated; it's 
too much work to remove all interpreter recursions until Py4K; 
please reassign this PEP a nineteen digit number.

and-there-is-NO-truth-to-the-rumor-that-stackless-users
-eat-human-flesh-<munch, munch>-ly y'rs

- Gordon



From tismer at tismer.com  Wed Mar 14 16:23:38 2001
From: tismer at tismer.com (Christian Tismer)
Date: Wed, 14 Mar 2001 16:23:38 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <3AAE9E3F.9635.7CE46C9C@localhost> <3AAF4310.26204.7F683B24@localhost>
Message-ID: <3AAF8CFA.58A9A68B@tismer.com>


Gordon McMillan wrote:
> 
> [Jeremy]

<big snip/>

> Obviously, those of us who like Stackless would be delighted
> to have all interpreter recursions removed. It's also obvious
> where your rhetorical argument is headed: Stackless is
> dangerous unless all interpreter recursions are eliminated; it's
> too much work to remove all interpreter recursions until Py4K;
> please reassign this PEP a nineteen digit number.

Of course we would like to see all recursions vanish.
Unfortunately this would make Python's current codebase
vanish almost completely, too, which would be bad. :)

That's the reason to have Stack Lite.

The funny observation after following this thread:
It appears that Stack Lite is in fact best suited for
Microthreads, better than for coroutines.

Reason: Microthreads schedule automatically, when it is allowed.
By normal use, it gives you no trouble to spawn an uthread
from any extension, since the scheduling is done by the
interpreter in charge only if it is active, after all nested
calls have been done.

Hence, Stack Lite gives us *all* of uthreads, and almost all of
generators and coroutines, except for the mentioned cases.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From guido at digicool.com  Wed Mar 14 16:26:23 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 14 Mar 2001 10:26:23 -0500
Subject: [Python-Dev] Kinds
In-Reply-To: Your message of "Tue, 13 Mar 2001 08:38:35 PST."
             <ADEOIFHFONCLEEPKCACCIEKKCGAA.paul@pfdubois.com> 
References: <ADEOIFHFONCLEEPKCACCIEKKCGAA.paul@pfdubois.com> 
Message-ID: <200103141526.KAA04151@cj20424-a.reston1.va.home.com>

I liked Paul's brief explanation of Kinds.  Maybe we could make it so
that there's a special Kind representing bignums, and eventually that
could become the default (as part of the int unification).  Then
everybody can have it their way.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Wed Mar 14 16:33:50 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 14 Mar 2001 10:33:50 -0500
Subject: [Python-Dev] New list for Discussion of Python's Numeric Model
In-Reply-To: Your message of "Tue, 13 Mar 2001 19:08:05 +0100."
             <20010313190805.C404@xs4all.nl> 
References: <E14ciAp-0005dJ-00@darjeeling> <15021.35405.877907.11490@w221.z064000254.bwi-md.dsl.cnc.net>  
            <20010313190805.C404@xs4all.nl> 
Message-ID: <200103141533.KAA04216@cj20424-a.reston1.va.home.com>

> I think the main reason for
> separate lists is to allow non-python-dev-ers easy access to the lists. 

Yes, this is the main reason.

I like it, it keeps my inbox separated out.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From pedroni at inf.ethz.ch  Wed Mar 14 16:41:03 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Wed, 14 Mar 2001 16:41:03 +0100 (MET)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
Message-ID: <200103141541.QAA03543@core.inf.ethz.ch>

Hi.

First of all I should admit I don't know what was discussed
at IPC9 about Stackless Python.

My plain question (as a Jython developer): is there a real intention
to make Python stackless in the short term (2.2, 2.3, ...)?

AFAIK then for Jython there are three options:
1 - Just don't care
2 - A major rewrite with performance issues (but AFAIK nobody has
  the resources for doing that)
3 - Try to implement some of the offered high-level features through
    threads (which could be pointless from a performance point of view:
     e.g. microthreads through threads, not that nice).
     
The options are really 3, just for the theoretical sake of compatibility
(I don't see the point of porting Stackless-based Python code to Jython),
or 1, plus some amount of frustration <wink>. Am I missing something?

The problem will be more serious if the std lib begins to use
the stackless features heavily.


regards, Samuele Pedroni.




From barry at digicool.com  Wed Mar 14 17:06:57 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Wed, 14 Mar 2001 11:06:57 -0500
Subject: [Python-Dev] OT: careful with that perl code
References: <011601c0ac8f$8cb66b80$0900a8c0@SPIFF>
Message-ID: <15023.38689.298294.736516@anthem.wooz.org>

>>>>> "FL" == Fredrik Lundh <fredrik at pythonware.com> writes:

    FL> http://slashdot.org/article.pl?sid=01/03/13/208259&mode=nocomment

    FL>     "because he wasn't familiar with the distinction between
    FL> perl's scalar and list context, S. now has a police record"

If it's true, I don't know what about that article scares / depresses me more.

born-in-the-usa-ly y'rs,
-Barry



From aycock at csc.UVic.CA  Wed Mar 14 19:02:43 2001
From: aycock at csc.UVic.CA (John Aycock)
Date: Wed, 14 Mar 2001 10:02:43 -0800
Subject: [Python-Dev] CML2 compiler speedup
Message-ID: <200103141802.KAA02907@valdes.csc.UVic.CA>

| talking about performance, has anyone played with using SRE's
| lastindex/lastgroup stuff with SPARK?

Not yet.  I will defer to Tim's informed opinion on this.

| (is there anything else I could do in SRE to make SPARK run faster?)

Well, if I'm wishing..  :-)

I would like all the parts of an alternation A|B|C to be searched for
at the same time (my assumption is that they aren't currently).  And
I'd also love a flag that would disable "first then longest" semantics
in favor of always taking the longest match.

John



From thomas at xs4all.net  Wed Mar 14 19:36:17 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 14 Mar 2001 19:36:17 +0100
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <200103141802.KAA02907@valdes.csc.UVic.CA>; from aycock@csc.UVic.CA on Wed, Mar 14, 2001 at 10:02:43AM -0800
References: <200103141802.KAA02907@valdes.csc.UVic.CA>
Message-ID: <20010314193617.F404@xs4all.nl>

On Wed, Mar 14, 2001 at 10:02:43AM -0800, John Aycock wrote:

> I would like all the parts of an alternation A|B|C to be searched for
> at the same time (my assumption is that they aren't currently).  And
> I'd also love a flag that would disable "first then longest" semantics
> in favor of always taking the longest match.

While on that subject.... Is there an easy way to get all the occurrences
of a repeating group? I wanted to do something like 'foo(bar|baz)+' and be
able to retrieve all matches of the group. I fixed it differently now, but I
kept wondering why that wasn't possible.
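For the record, a repeating group remembers only its *last* match in Python's `re`; the usual workaround is to capture the whole repeated span with an outer group and re-scan it (a sketch):

```python
import re

# With the group inside the repetition, only the final repetition
# survives:
print(re.match(r'foo(bar|baz)+', 'foobazbarbaz').group(1))  # 'baz'

# Workaround: capture the whole repeated span, then findall over it.
m = re.match(r'foo((?:bar|baz)+)', 'foobazbarbaz')
print(re.findall(r'bar|baz', m.group(1)))  # ['baz', 'bar', 'baz']
```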

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From esr at golux.thyrsus.com  Tue Mar 13 23:17:42 2001
From: esr at golux.thyrsus.com (Eric)
Date: Tue, 13 Mar 2001 14:17:42 -0800
Subject: [Python-Dev] freeze is broken in 2.x
Message-ID: <E14cx6s-0002zN-00@golux.thyrsus.com>

It appears that the freeze tools are completely broken in 2.x.  This 
is rather unfortunate, as I was hoping to use them to end-run some
objections to CML2 and thereby get python into the Linux kernel tree.

I have fixed some obvious errors (use of the deprecated 'cmp' module;
use of regex) but I have encountered run-time errors that are beyond
my competence to fix.  From a cursory inspection of the code it looks
to me like the freeze tools need adaptation to the new
distutils-centric build process.

Do these tools have a maintainer?  They need some serious work.
--
							>>esr>>



From thomas.heller at ion-tof.com  Wed Mar 14 22:23:39 2001
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Wed, 14 Mar 2001 22:23:39 +0100
Subject: [Python-Dev] freeze is broken in 2.x
References: <E14cx6s-0002zN-00@golux.thyrsus.com>
Message-ID: <05fd01c0accd$0a1dc450$e000a8c0@thomasnotebook>

> It appears that the freeze tools are completely broken in 2.x.  This 
> is rather unfortunate, as I was hoping to use them to end-run some
> objections to CML2 and thereby get python into the Linux kernel tree.
> 
> I have fixed some obvious errors (use of the deprecated 'cmp' module;
> use of regex) but I have encountered run-time errors that are beyond
> my competence to fix.  From a cursory inspection of the code it looks
> to me like the freeze tools need adaptation to the new
> distutils-centric build process.

I have some ideas about merging freeze into distutils, but this is
nothing which could be implemented for 2.1.

> 
> Do these tools have a maintainer?  They need some serious work.

At least they seem to have users.

Thomas




From esr at golux.thyrsus.com  Wed Mar 14 22:37:10 2001
From: esr at golux.thyrsus.com (Eric)
Date: Wed, 14 Mar 2001 13:37:10 -0800
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>; from tim.one@home.com on Wed, Mar 14, 2001 at 02:27:21AM -0500
References: <200103131532.f2DFWpw04691@snark.thyrsus.com> <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com>
Message-ID: <20010314133710.J2046@thyrsus.com>

Tim Peters <tim.one at home.com>:
> If all you got out of crafting a one-grammar parser by hand is a measly
> factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
> parser generators for restricted grammars, in C).  For the all-purpose Earley
> parser to get that close is really quite an accomplishment!  SPARK was
> written primarily for rapid prototyping, at which it excels (how many times
> did you change your grammar during development?  how much longer would it
> have taken you to adjust had you needed to rework your RD parser each time?).

SPARK is indeed a wonderful prototyping tool, and I admire John Aycock for
producing it (though he really needs to do better on the documentation).

Unfortunately, Michael Elizabeth Chastain pointed out that it imposes a
bad startup delay in some important cases of CML2 usage.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Americans have the will to resist because you have weapons. 
If you don't have a gun, freedom of speech has no power.
         -- Yoshimi Ishikawa, Japanese author, in the LA Times 15 Oct 1992



From esr at golux.thyrsus.com  Wed Mar 14 22:38:14 2001
From: esr at golux.thyrsus.com (Eric)
Date: Wed, 14 Mar 2001 13:38:14 -0800
Subject: [Python-Dev] CML2 compiler speedup
In-Reply-To: <014401c0ac60$4f0b1c60$e46940d5@hagrid>; from fredrik@pythonware.com on Wed, Mar 14, 2001 at 09:25:19AM +0100
References: <LNBBLJKPBEHFEDALKOLCOEJCJFAA.tim.one@home.com> <014401c0ac60$4f0b1c60$e46940d5@hagrid>
Message-ID: <20010314133814.K2046@thyrsus.com>

Fredrik Lundh <fredrik at pythonware.com>:
> tim wrote:
> > If all you got out of crafting a one-grammar parser by hand is a measly
> > factor of 2, there's likely a factor of 10 still untouched (wrt "the best"
> > parser generators for restricted grammars, in C).
> 
> talking about performance, has anyone played with using SRE's
> lastindex/lastgroup stuff with SPARK?
> 
> (is there anything else I could do in SRE to make SPARK run faster?)

Wouldn't help me, I wasn't using the SPARK scanner.  The overhead really
was in the parsing.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Gun Control: The theory that a woman found dead in an alley, raped and
strangled with her panty hose, is somehow morally superior to a
woman explaining to police how her attacker got that fatal bullet wound.
	-- L. Neil Smith



From guido at digicool.com  Thu Mar 15 00:05:50 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 14 Mar 2001 18:05:50 -0500
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: Your message of "Tue, 13 Mar 2001 14:17:42 PST."
             <E14cx6s-0002zN-00@golux.thyrsus.com> 
References: <E14cx6s-0002zN-00@golux.thyrsus.com> 
Message-ID: <200103142305.SAA05872@cj20424-a.reston1.va.home.com>

> It appears that the freeze tools are completely broken in 2.x.  This 
> is rather unfortunate, as I was hoping to use them to end-run some
> objections to CML2 and thereby get python into the Linux kernel tree.
> 
> I have fixed some obvious errors (use of the deprecated 'cmp' module;
> use of regex) but I have encountered run-time errors that are beyond
> my competence to fix.  From a cursory inspection of the code it looks
> to me like the freeze tools need adaptation to the new
> distutils-centric build process.
> 
> Do these tools have a maintainer?  They need some serious work.

The last maintainers were me and Mark Hammond, but neither of us has
time to look into this right now.  (At least I know I don't.)

What kind of errors do you encounter?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Thu Mar 15 01:28:15 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 19:28:15 -0500
Subject: [Python-Dev] 2.1b2 next Friday?
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMGJFAA.tim.one@home.com>

We need another beta release (according to me).  Anyone disagree?

If not, let's pump it out next Friday, 23-Mar-2001.  That leaves 3 weeks for
intense final testing before 2.1 final (which PEP 226 has scheduled for
13-Apr-2001).




From greg at cosc.canterbury.ac.nz  Thu Mar 15 01:31:00 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 15 Mar 2001 13:31:00 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AAF3C45.1972981F@tismer.com>
Message-ID: <200103150031.NAA05310@s454.cosc.canterbury.ac.nz>

Christian Tismer <tismer at tismer.com>:

> You can *create* a thread using a callback.

Okay, that's not so bad. (An earlier message seemed to
be saying that you couldn't even do that.)

But what about GUIs such as Tkinter which have a
main loop in C that keeps control for the life of
the program? You'll never get back to the base-level
interpreter, not even between callbacks, so how do 
the uthreads get scheduled?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Mar 15 01:47:12 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 15 Mar 2001 13:47:12 +1300 (NZDT)
Subject: WYSIWYG decimal fractions (RE: [Python-Dev] Minutes from the Numeric Coercion dev-day session)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEJIJFAA.tim.one@home.com>
Message-ID: <200103150047.NAA05314@s454.cosc.canterbury.ac.nz>

Maybe Python should use decimal FP as the *default* representation
for fractional numbers, with binary FP available as an option for
those who really want it.

Unadorned FP literals would give you decimal FP, as would float().
There would be another syntax for binary FP literals (e.g. a 'b'
suffix) and a bfloat() function.

My first thought was that binary FP literals should have to be
written in hex or octal. ("You want binary FP? Then you can jolly
well learn to THINK in it!") But that might be a little extreme.

By the way, what if CPU designers started providing decimal FP 
in hardware? Could scientists and ordinary mortals then share the
same FP system and be happy? The only disadvantage I can think of 
for the scientists is that a bit more memory would be required, but
memory is cheap nowadays. Are there any other drawbacks that
I haven't thought of?
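
A latter-day illustration of the gap Greg is pointing at, using the
decimal module that CPython eventually grew in 2.4 (so the import below
is an anachronism relative to this thread):

```python
from decimal import Decimal

# Binary floating point: 0.1 has no exact base-2 representation,
# so ten additions drift away from the expected total.
print(sum([0.1] * 10) == 1.0)                      # False on IEEE-754 doubles

# Decimal floating point represents 0.1 exactly, so the same sum
# comes out the way a human would expect.
print(sum([Decimal("0.1")] * 10) == Decimal("1"))  # True
```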

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim.one at home.com  Thu Mar 15 03:01:50 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 14 Mar 2001 21:01:50 -0500
Subject: WYSIWYG decimal fractions (RE: [Python-Dev] Minutes from the Numeric Coercion dev-day session)
In-Reply-To: <200103150047.NAA05314@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEMIJFAA.tim.one@home.com>

[Greg Ewing]
> Maybe Python should use decimal FP as the *default* representation
> for fractional numbers, with binary FP available as an option for
> those who really want it.

NumPy users would scream bloody murder.

> Unadorned FP literals would give you decimal FP, as would float().
> There would be another syntax for binary FP literals (e.g. a 'b'
> suffix) and a bfloat() function.

Ditto.

> My first thought was that binary FP literals should have to be
> written in hex or octal. ("You want binary FP? Then you can jolly
> well learn to THINK in it!") But that might be a little extreme.

"A little"?  Yes <wink>.  Note that C99 introduces hex fp notation, though,
as it's the only way to be sure you're getting the bits you need (when it
really matters, as it can, e.g., in accurate implementations of math
libraries).
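
Python later grew the same facility Tim mentions from C99: hex float
notation, exposed as float.hex() / float.fromhex() in 2.6.  A quick demo:

```python
# Hex notation pins down the exact bits, which decimal literals cannot:
x = float.fromhex("0x1.8p1")   # mantissa 1.5 times 2**1
print(x)                       # 3.0

# 0.1 is not exactly representable in binary; its hex form shows the
# repeating pattern that gets rounded off:
print((0.1).hex())             # 0x1.999999999999ap-4
```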

> By the way, what if CPU designers started providing decimal FP
> in hardware? Could scientists and ordinary mortals then share the
> same FP system and be happy?

Sure!  Countless happy users of scientific calculators are evidence of
that -- virtually all calculators use decimal fp, for the obvious human
factors reasons ("obvious", I guess, to everyone except most post-1960's
language designers <wink>).

> The only disadvantage I can think of for the scientists is that a
> bit more memory would be required, but memory is cheap nowadays. Are
> there any other drawbacks that I haven't thought of?

See the Kahan paper I referenced yesterday (also the FAQ mentioned below).
He discusses it briefly.  Base 10 HW fp has small additional speed costs, and
makes error analysis a bit harder (at the boundaries where an exponent goes
up, the gaps between representable fp numbers are larger the larger the
base -- in a sense, e.g., whenever a decimal fp number ends with 5, it's
"wasting" a couple bits of potential precision; in that sense, binary fp is
provably optimal).


Mike Cowlishaw (REXX's father) is currently working hard in this area:

    http://www2.hursley.ibm.com/decimal/

That's an excellent resource for people curious about decimal fp.

REXX has many users in financial and commercial fields, where binary fp is a
nightmare to live with (BTW, REXX does use decimal fp).  An IBM study
referenced in the FAQ found that less than 2% of the numeric fields in
commercial databases contained data of a binary float type; more than half
used the database's form of decimal fp; the rest were of integer types.  It's
reasonable to speculate that much of the binary fp data was being used simply
because it was outside the dynamic range of the database's decimal fp type --
in which case even the tiny "< 2%" is an overstatement.

Maybe 5 years ago I asked Cowlishaw whether Python could "borrow" REXX's
software decimal fp routines.  He said sure.  Ironically, I had more time to
pursue it then than I have now ...

less-than-zero-in-an-unsigned-type-ly y'rs  - tim




From greg at cosc.canterbury.ac.nz  Thu Mar 15 05:02:24 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 15 Mar 2001 17:02:24 +1300 (NZDT)
Subject: WYSIWYG decimal fractions (RE: [Python-Dev] Minutes from the Numeric Coercion dev-day session)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEMIJFAA.tim.one@home.com>
Message-ID: <200103150402.RAA05333@s454.cosc.canterbury.ac.nz>

Tim Peters <tim.one at home.com>:

> NumPy users would scream bloody murder.

It would probably be okay for NumPy to use binary FP by default.
If you're using NumPy, you're probably a scientist or mathematician
already and are aware of the issues.

The same goes for any other extension module designed for
specialist uses, e.g. 3D graphics.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+




From aahz at panix.com  Thu Mar 15 07:14:54 2001
From: aahz at panix.com (aahz at panix.com)
Date: Thu, 15 Mar 2001 01:14:54 -0500 (EST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
Message-ID: <200103150614.BAA04221@panix6.panix.com>

[posted to c.l.py.announce and c.l.py; followups to c.l.py; cc'd to
python-dev]

Okay, folks, here it is, the first draft of the spec for creating Python
maintenance releases.  Note that I'm not on python-dev, so it's probably
better to have the discussion on c.l.py if possible.

            PEP: 6
          Title: Patch and Bug Fix Releases
        Version: $Revision: 1.1 $
         Author: aahz at pobox.com (Aahz)
         Status: Draft
           Type: Informational
        Created: 15-Mar-2001
   Post-History:
     _________________________________________________________________
   
  Abstract
  
    Python has historically had only a single fork of development,
    with releases having the combined purpose of adding new features
    and delivering bug fixes (these kinds of releases will be referred
    to as "feature releases").  This PEP describes how to fork off
    patch releases of old versions for the primary purpose of fixing
    bugs.

    This PEP is not, repeat NOT, a guarantee of the existence of patch
    releases; it only specifies a procedure to be followed if patch
    releases are desired by enough of the Python community willing to
    do the work.


  Motivation
  
    With the move to SourceForge, Python development has accelerated.
    There is a sentiment among part of the community that there was
    too much acceleration, and many people are uncomfortable with
    upgrading to new versions to get bug fixes when so many features
    have been added, sometimes late in the development cycle.

    One solution for this issue is to maintain old feature releases,
    providing bug fixes and (minimal!) feature additions.  This will
    make Python more attractive for enterprise development, where
    Python may need to be installed on hundreds or thousands of
    machines.

    At the same time, many of the core Python developers are
    understandably reluctant to devote a significant fraction of their
    time and energy to what they perceive as grunt work.  On the
    gripping hand, people are likely to feel discomfort around
    installing releases that are not certified by PythonLabs.


  Prohibitions
  
    Patch releases are required to adhere to the following
    restrictions:

    1. There must be zero syntax changes.  All .pyc and .pyo files
       must work (no regeneration needed) with all patch releases
       forked off from a feature release.

    2. There must be no incompatible C API changes.  All extensions
       must continue to work without recompiling in all patch releases
       in the same fork as a feature release.


  Bug Fix Releases
  
    Bug fix releases are a subset of all patch releases; it is
    prohibited to add any features to the core in a bug fix release.
    A patch release that is not a bug fix release may contain minor
    feature enhancements, subject to the Prohibitions section.

    The standard for patches to extensions and modules is a bit more
    lenient, to account for the possible desirability of including a
    module from a future version that contains mostly bug fixes but
    may also have some small feature changes.  (E.g. Fredrik Lundh
    making available the 2.1 sre module for 2.0 and 1.5.2.)


  Version Numbers
  
    Starting with Python 2.0, all feature releases are required to
    have the form X.Y; patch releases will always be of the form
    X.Y.Z.  To clarify the distinction between a bug fix release and a
    patch release, all non-bug fix patch releases will have the suffix
    "p" added.  For example, "2.1" is a feature release; "2.1.1" is a
    bug fix release; and "2.1.2p" is a patch release that contains
    minor feature enhancements.
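
    The scheme is mechanical enough to check with a short helper; the
    function below is an illustration only, not part of the PEP:

```python
import re

def classify(version):
    """Classify a release string under the proposed PEP 6 scheme."""
    if re.fullmatch(r"\d+\.\d+", version):
        return "feature release"
    if re.fullmatch(r"\d+\.\d+\.\d+", version):
        return "bug fix release"
    if re.fullmatch(r"\d+\.\d+\.\d+p", version):
        return "patch release"
    raise ValueError("unrecognized version string: %r" % (version,))

print(classify("2.1"))      # feature release
print(classify("2.1.1"))    # bug fix release
print(classify("2.1.2p"))   # patch release
```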


  Procedure
  
    XXX This section is still a little light (and probably
    controversial!)

    The Patch Czar is the counterpart to the BDFL for patch releases.
    However, the BDFL and designated appointees retain veto power over
    individual patches and the decision of whether to label a patch
    release as a bug fix release.

    As individual patches get contributed to the feature release fork,
    each patch contributor is requested to consider whether the patch
    is a bug fix suitable for inclusion in a patch release.  If the
    patch is considered suitable, the patch contributor will mail the
    SourceForge patch (bug fix?) number to the maintainers' mailing
    list.

    In addition, anyone from the Python community is free to suggest
    patches for inclusion.  Patches may be submitted specifically for
    patch releases; they should follow the guidelines in PEP 3[1].

    The Patch Czar decides when there are a sufficient number of
    patches to warrant a release.  The release gets packaged up,
    including a Windows installer, and made public as a beta release.
    If any new bugs are found, they must be fixed and a new beta
    release publicized.  Once a beta cycle completes with no new bugs
    found, the package is sent to PythonLabs for certification and
    publication on python.org.

    Each beta cycle must last a minimum of one month.


  Issues To Be Resolved
  
    Should the first patch release following any feature release be
    required to be a bug fix release?  (Aahz proposes "yes".)

    Is it allowed to do multiple forks (e.g. is it permitted to have
    both 2.0.2 and 2.0.2p)?  (Aahz proposes "no".)

    Does it make sense for a bug fix release to follow a patch
    release?  (E.g., 2.0.1, 2.0.2p, 2.0.3.)

    Exactly how does a candidate patch release get submitted to
    PythonLabs for certification?  And what does "certification" mean,
    anyway?  ;-)

    Who is the Patch Czar?  Is the Patch Czar a single person?  (Aahz
    says "not me alone".  Aahz is willing to do a lot of the
    non-technical work, but Aahz is not a C programmer.)

    What is the equivalent of python-dev for people who are
    responsible for maintaining Python?  (Aahz proposes either
    python-patch or python-maint, hosted at either python.org or
    xs4all.net.)

    Does SourceForge make it possible to maintain both separate and
    combined bug lists for multiple forks?  If not, how do we mark
    bugs fixed in different forks?  (Simplest is to generate a
    new bug for each fork that it gets fixed in, referring back to the
    main bug number for details.)


  References
  
    [1] PEP 3, Hylton, http://python.sourceforge.net/peps/pep-0003.html


  Copyright
  
    This document has been placed in the public domain.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"The overexamined life sure is boring."  --Loyal Mini Onion



From tismer at tismer.com  Thu Mar 15 12:30:09 2001
From: tismer at tismer.com (Christian Tismer)
Date: Thu, 15 Mar 2001 12:30:09 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103141541.QAA03543@core.inf.ethz.ch>
Message-ID: <3AB0A7C1.B86E63F2@tismer.com>


Samuele Pedroni wrote:
> 
> Hi.
> 
> First of all I should admit I don't know what was discussed
> at IPC9 about Stackless Python.

This would have answered your question.

> My plain question (as jython developer): is there a real intention
> to make python stackless in the short term (2.2, 2.3...)

Yes.

> AFAIK then for jython there are three options:
> 1 - Just don't care
> 2 - A major rewrite with performance issues (but AFAIK nobody has
>   the resources for doing that)
> 3 - try to implement some of the highlevel offered features through threads
>    (which could be pointless from a performance point of view:
>      e.g. microthreads through threads, not that nice).
> 
> So the options are: 3, just for the theoretical sake of compatibility
> (I don't see the point to port python stackless based code to jython)
>  or 1 plus some amount of frustration <wink>. Am I missing something?
> 
> The problem will be more serious if the std lib will begin to use
> heavily the stackless features.

Option 1 would be even fine with me. I would make all
Stackless features optional, not enforcing them for the
language.

Option 2 doesn't look reasonable. We cannot switch
microthreads without changing the VM. In CPython,
the VM is available, in Jython it is immutable.
The only way I would see is to turn Jython into
an interpreter instead of producing VM code. That
would do, but at an immense performance cost.

Option 3 is Guido's view of a compatibility layer.
Microthreads can be simulated by threads in fact.
This is slow, but compatible, making stuff just work.
Most probably this version is performing better than
option 2.
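
As a sketch of what cooperative microthreads mean operationally, here is
a tiny scheduler built on generators, the mechanism CPython actually
grew shortly after this thread; all names below are illustrative, and
none of this is Stackless API:

```python
# Each task is a generator that yields to give up its time slice.
from collections import deque

def scheduler(tasks):
    queue = deque(tasks)
    while queue:
        task = queue.popleft()
        try:
            next(task)          # run the task until it yields
            queue.append(task)  # still alive: put it back in line
        except StopIteration:
            pass                # task finished, drop it

def counter(name, n, log):
    for i in range(n):
        log.append((name, i))
        yield                   # cooperative context switch

log = []
scheduler([counter("a", 2, log), counter("b", 2, log)])
print(log)  # [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```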

I don't believe that the library will become a problem,
if modifications are made with Jython in mind.

Personally, I'm not convinced that any of these will make
Jython users happy. The concurrency domain will in
fact be dominated by CPython, since one of the best
features of uthreads is their incredible speed and small size.
But this is similar to a couple of extensions for CPython
which are just not available for Jython.

I tried hard to find out how to make Jython Stackless.
There was no way yet, I'm very very sorry!
On the other hand I don't think
that Jython should play the showstopper for a technology
that people really want. Including the stackless machinery
into Python without enforcing it would be my way.
Parallel stuff can sit in an extension module.
Of course there will be a split of modules which don't
work in Jython, or which are less efficient in Jython.
But if efficiency is the demand, Jython wouldn't be
the right choice, anyway.

regards - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From guido at digicool.com  Thu Mar 15 12:55:56 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 06:55:56 -0500
Subject: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: Your message of "Thu, 15 Mar 2001 17:02:24 +1300."
             <200103150402.RAA05333@s454.cosc.canterbury.ac.nz> 
References: <200103150402.RAA05333@s454.cosc.canterbury.ac.nz> 
Message-ID: <200103151155.GAA07429@cj20424-a.reston1.va.home.com>

I'll say one thing and then I'll try to keep my peace about this.

I think that using rationals as the default type for
decimal-with-floating-point notation won't fly.  There are too many
issues, e.g. performance, rounding on display, usability for advanced
users, backwards compatibility.  This means that it just isn't
possible to get a consensus about moving in this direction.

Using decimal floating point won't fly either, for mostly the same
reasons, plus the implementation appears to be riddled with gotcha's
(at least rationals are relatively clean and easy to implement, given
that we already have bignums).

I don't think I have the time or energy to argue this much further --
someone will have to argue until they have a solution that the various
groups (educators, scientists, and programmers) can agree on.  Maybe
language levels will save the world?

That leaves three topics as potential low-hanging fruit:

- Integer unification (PEP 237).  It's mostly agreed that plain ints
  and long ints should be unified.  Simply creating a long where we
  currently overflow would be the easiest route; it has some problems
  (it's not 100% seamless) but I think it's usable and I see no real
  disadvantages.

- Number unification.  This is more controversial, but I believe less
  so than rationals or decimal f.p.  It would remove all semantic
  differences between "1" and "1.0", and therefore 1/2 would return
  0.5.  The latter is separately discussed in PEP 238, but I now
  believe this should only be done as part of a general unification.
  Given my position on decimal f.p. and rationals, this would mean an
  approximate, binary f.p. result for 1/3, and this does not seem to
  have the support of the educators (e.g. Jeff Elkner is strongly
  opposed to teaching floats at all).  But other educators (e.g. Randy
  Pausch, and the folks who did VPython) strongly recommend this based
  on user observation, so there's hope.  As a programmer, as long as
  there's *some* way to spell integer division (even div(i, j) will
  do), I don't mind.  The breakage of existing code will be great, so
  we'll be forced to introduce this gradually using a future_statement
  and warnings.

- "Kinds", as proposed by Paul Dubois.  This doesn't break existing
  code or change existing semantics, it just adds more control for
  those who want it.  I think this might just work.  Will someone
  kindly help Paul get this in PEP form?
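
For the record, the first two items are close to where CPython actually
landed in 2.2 and later; a minimal illustration in today's Python, where
the future statement mentioned above has become the default:

```python
# Integer unification: a single arbitrary-precision int type, so large
# results neither raise OverflowError nor grow an 'L' suffix.
print(2 ** 100)   # 1267650600228229401496703205376

# Division: true division was enabled in 2.2 via
# "from __future__ import division" and is the default since 3.0.
print(1 / 2)      # 0.5 -- "1/2 would return 0.5"
print(1 // 2)     # 0   -- an explicit spelling of integer division
```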

PS.  Moshe, please check in your PEPs.  They need to be on-line.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tismer at tismer.com  Thu Mar 15 13:41:07 2001
From: tismer at tismer.com (Christian Tismer)
Date: Thu, 15 Mar 2001 13:41:07 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103150031.NAA05310@s454.cosc.canterbury.ac.nz>
Message-ID: <3AB0B863.52DFB61C@tismer.com>


Greg Ewing wrote:
> 
> Christian Tismer <tismer at tismer.com>:
> 
> > You can *create* a thread using a callback.
> 
> Okay, that's not so bad. (An earlier message seemed to
> be saying that you couldn't even do that.)
> 
> But what about GUIs such as Tkinter which have a
> main loop in C that keeps control for the life of
> the program? You'll never get back to the base-level
> interpreter, not even between callbacks, so how do
> the uthreads get scheduled?

This would not work. One simple thing I could think of is
to let the GUI live in an OS thread, and have another
thread for all the microthreads.
More difficult but maybe better: A C main loop which
doesn't run an interpreter will block otherwise. But
most probably, it will run interpreters from time to time.
These can be told to take the scheduling role on.
It does not matter on which interpreter level we are,
we just can't switch to frames of other levels. But
even leaving a frame chain, and re-entering later
with a different stack level is no problem.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From paulp at ActiveState.com  Thu Mar 15 14:30:52 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Thu, 15 Mar 2001 05:30:52 -0800
Subject: [Python-Dev] Before it was called Stackless....
Message-ID: <3AB0C40C.54CAA328@ActiveState.com>

http://www.python.org/workshops/1995-05/WIP.html

I found Guido's "todo list" from 1995. 

	Move the C stack out of the way 

It may be possible to implement Python-to-Python function and method
calls without pushing a C stack frame. This has several advantages -- it
could be more efficient, it may be possible to save and restore the
Python stack to enable migrating programs, and it may be possible to
implement multiple threads without OS specific support (the latter is
questionable however, since it would require a solution for all blocking
system calls). 



-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From tim.one at home.com  Thu Mar 15 16:31:57 2001
From: tim.one at home.com (Tim Peters)
Date: Thu, 15 Mar 2001 10:31:57 -0500
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <200103151155.GAA07429@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com>

[Guido]
> I'll say one thing and then I'll try to keep my peace about this.

If this was one thing, you're suffering major roundoff error <wink>.

> I think that using rationals as the default type for
> decimal-with-floating-point notation won't fly.  There are too many
> issues, e.g. performance, rounding on display, usability for advanced
> users, backwards compatibility.  This means that it just isn't
> possible to get a consensus about moving in this direction.

Agreed.

> Using decimal floating point won't fly either,

If you again mean "by default", also agreed.

> for mostly the same reasons, plus the implementation appears to
> be riddled with gotcha's

It's exactly as difficult or easy as implementing binary fp in software; see
yesterday's link to Cowlishaw's work for detailed pointers; and as I said
before, Cowlishaw earlier agreed (years ago) to let Python use REXX's
implementation code.

> (at least rationals are relatively clean and easy to implement, given
> that we already have bignums).

Oddly enough, I believe rationals are more code in the end (e.g., my own
Rational package is about 3000 lines of Python, but indeed is so general it
subsumes IEEE 854 (the decimal variant of IEEE 754) except for Infs and
NaNs) -- after you add rounding facilities to Rationals, they're as hairy as
decimal fp.
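
Today's stdlib makes the comparison concrete: fractions.Fraction (added
long after this thread) gives exact rational arithmetic, but any
fixed-precision rendering reintroduces exactly the rounding machinery
Tim describes:

```python
from fractions import Fraction

third = Fraction(1, 3)
print(third + third + third == 1)   # True: rational arithmetic is exact

# Fixing the displayed precision brings rounding back in:
print(round(float(third), 4))       # 0.3333
```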

> I don't think I have the time or energy to argue this much further --
> someone will have to argue until they have a solution that the various
> groups (educators, scientists, and programmers) can agree on.  Maybe
> language levels will save the world?

A per-module directive specifying the default interpretation of fp literals
within the module is an ugly but workable possibility.

> That leaves three topics as potential low-hanging fruit:
>
> - Integer unification (PEP 237).  It's mostly agreed that plain ints
>   and long ints should be unified.  Simply creating a long where we
>   currently overflow would be the easiest route; it has some problems
>   (it's not 100% seamless) but I think it's usable and I see no real
>   disadvantages.

Good!

> - Number unification.  This is more controversial, but I believe less
>   so than rationals or decimal f.p.  It would remove all semantic
>   differences between "1" and "1.0", and therefore 1/2 would return
>   0.5.

The only "number unification" PEP on the table does not remove all semantic
differences:  1.0 is tagged as inexact under Moshe's PEP, but 1 is not.  So
this is some other meaning for unification.  Trying to be clear.

>   The latter is separately discussed in PEP 238, but I now believe
>   this should only be done as part of a general unification.
>   Given my position on decimal f.p. and rationals, this would mean an
>   approximate, binary f.p. result for 1/3, and this does not seem to
>   have the support of the educators (e.g. Jeff Elkner is strongly
>   opposed to teaching floats at all).

I think you'd have a very hard time finding any pre-college level teacher who
wants to teach binary fp.  Your ABC experience is consistent with that too.

>  But other educators (e.g. Randy Pausch, and the folks who did
> VPython) strongly recommend this based on user observation, so there's
> hope.

Alice is a red herring!  What they wanted was for 1/2 *not* to mean 0.  I've
read the papers and dissertations too -- there was no plea for binary fp in
those, just that division not throw away info.  The strongest you can claim
using these projects as evidence is that binary fp would be *adequate* for a
newbie graphics application.  And I'd agree with that.  But graphics is a
small corner of education, and either rationals or decimal fp would also be
adequate for newbie graphics.

>   As a programmer, as long as there's *some* way to spell integer
>   division (even div(i, j) will do), I don't mind.

Yes, I need that too.

>   The breakage of existing code will be great so we'll be forced to
>   introduce this gradually using a future_statement and warnings.
>
> - "Kinds", as proposed by Paul Dubois.  This doesn't break existing
>   code or change existing semantics, it just adds more control for
>   those who want it.  I think this might just work.  Will someone
>   kindly help Paul get this in PEP form?

I will.

> PS.  Moshe, please check in your PEPs.  They need to be on-line.

Absolutely.




From pedroni at inf.ethz.ch  Thu Mar 15 16:39:18 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Thu, 15 Mar 2001 16:39:18 +0100 (MET)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
Message-ID: <200103151539.QAA01573@core.inf.ethz.ch>

Hi.

[Christian Tismer]
> Samuele Pedroni wrote:
> > 
> > Hi.
> > 
> > First of all I should admit I don't know what was discussed
> > at IPC9 about Stackless Python.
> 
> This would have answered your question.
> 
> > My plain question (as jython developer): is there a real intention
> > to make python stackless in the short term (2.2, 2.3...)
> 
> Yes.
Now I know <wink>.

> > AFAIK then for jython there are three options:
> > 1 - Just don't care
> > 2 - A major rewrite with performance issues (but AFAIK nobody has
> >   the resources for doing that)
> > 3 - try to implement some of the high-level features offered, through threads
> >    (which could be pointless from a performance point of view:
> >      e.g. microthreads through threads, not that nice).
> > 
> > The options are 3, just for the theoretical sake of compatibility
> > (I don't see the point of porting Python Stackless-based code to Jython),
> >  or 1 plus some amount of frustration <wink>. Am I missing something?
> > 
> > The problem will be more serious if the std lib begins to use
> > the stackless features heavily.
> 
> Option 1 would be even fine with me. I would make all
> Stackless features optional, not enforcing them for the
> language.
> Option 2 doesn't look reasonable. We cannot switch
> microthreads without changing the VM. In CPython,
> the VM is available, in Jython it is immutable.
> The only way I would see is to turn Jython into
> an interpreter instead of producing VM code. That
> would do, but at an immense performance cost.
To be honest, each Python method invocation takes such a tour
in Jython that maybe the cost would not be that much, but
we would lose the smooth Java and Jython integration and
the possibility of having Jython applets...
so it is a no-go, and nobody has time for doing that.

> 
> Option 3 is Guido's view of a compatibility layer.
> Microthreads can be simulated by threads in fact.
> This is slow, but compatible, making stuff just work.
> Most probably this version is performing better than
> option 2.
In the long run that could find a natural solution, at least
wrt uthreads; Java is having some success on the server side,
and there is some ongoing research on writing JVMs with their
own scheduled lightweight threads, such that a larger number
of threads can be handled in a smoother way.

> I don't believe that the library will become a problem,
> if modifications are made with Jython in mind.
I was thinking about stuff like generators used everywhere,
but maybe that is just uninformed panicking. They are the
kind of stuff that makes programmers addicted <wink>.

> 
> Personally, I'm not convinced that any of these will make
> Jython users happy. 
If they are not informed, they just won't care <wink>

> I tried hard to find out how to make Jython Stackless.
> There was no way yet, I'm very very sorry!
You were trying something impossible <wink>;
the smooth integration with Java is the big win of Jython, and
there is no way of making it stackless while preserving that.

> On the other hand I don't think
> that Jython should play the showstopper for a technology
> that people really want. 
Fine for me.

> Including the stackless machinery
> into Python without enforcing it would be my way.
> Parallel stuff can sit in an extension module.
> Of course there will be a split of modules which don't
> work in Jython, or which are less efficient in Jython.
> But if efficiency is the demand, Jython wouldn't be
> the right choice, anyway.
And Python without C isn't that either.
All the dynamic optimisation technology behind the JVM makes it outperform
the PVM for things like tight loops, etc.
And Jython can't exploit any of that, because Python is too dynamic,
sometimes even in spurious ways.

In different ways they (Java, Python, ...) are all good approximations of the
Right Thing without being it, for different reasons.
(just a bit of personal frustration ;))

regards.




From guido at digicool.com  Thu Mar 15 16:42:32 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 10:42:32 -0500
Subject: [Python-Dev] Re: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: Your message of "Thu, 15 Mar 2001 10:31:57 EST."
             <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com> 
Message-ID: <200103151542.KAA09191@cj20424-a.reston1.va.home.com>

> I think you'd have a very hard time finding any pre-college level teacher who
> wants to teach binary fp.  Your ABC experience is consistent with that too.

"Want to", no.  But whether they're teaching Java, C++, or Pascal,
they have no choice: if they need 0.5, they'll need binary floating
point, whether they explain it adequately or not.  Possibly they are
all staying away from the decimal point completely, but I find that
hard to believe.

> >  But other educators (e.g. Randy Pausch, and the folks who did
> > VPython) strongly recommend this based on user observation, so there's
> > hope.
> 
> Alice is a red herring!  What they wanted was for 1/2 *not* to mean 0.  I've
> read the papers and dissertations too -- there was no plea for binary fp in
> those, just that division not throw away info.

I never said otherwise.  It just boils down to binary fp as the only
realistic choice.
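The two issues being argued past each other here can be seen in a few lines of modern Python (the `//` operator and `__future__` division arrived later, with PEP 238):

```python
from __future__ import division

# Issue 1: what 1/2 means. True division yields binary fp, not 0,
# and integer division keeps its own explicit spelling.
assert 1 / 2 == 0.5
assert 1 // 2 == 0

# Issue 2: binary fp cannot represent most decimal fractions exactly,
# which is the educators' complaint that rationals or decimal fp avoid.
assert 0.1 + 0.2 != 0.3
```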

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mal at lemburg.com  Thu Mar 15 17:31:34 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Thu, 15 Mar 2001 17:31:34 +0100
Subject: [Python-Dev] Re: WYSIWYG decimal fractions
References: <200103150402.RAA05333@s454.cosc.canterbury.ac.nz> <200103151155.GAA07429@cj20424-a.reston1.va.home.com>
Message-ID: <3AB0EE66.37E6C633@lemburg.com>

Guido van Rossum wrote:
> 
> I'll say one thing and then I'll try to keep my peace about this.
> 
> I think that using rationals as the default type for
> decimal-with-floating-point notation won't fly.  There are too many
> issues, e.g. performance, rounding on display, usability for advanced
> users, backwards compatibility.  This means that it just isn't
> possible to get a consensus about moving in this direction.
> 
> Using decimal floating point won't fly either, for mostly the same
> reasons, plus the implementation appears to be riddled with gotcha's
> (at least rationals are relatively clean and easy to implement, given
> that we already have bignums).
> 
> I don't think I have the time or energy to argue this much further --
> someone will have to argue until they have a solution that the various
> groups (educators, scientists, and programmers) can agree on.  Maybe
> language levels will save the world?

Just out of curiosity: is there a usable decimal type implementation
somewhere on the net which we could beat on ?

I for one would be very interested in having a decimal type
around (with fixed precision and scale), since databases rely
on these a lot and I would like to assure that passing database
data through Python doesn't cause any data loss due to rounding
issues.

If there aren't any such implementations yet, the site that Tim
mentioned looks like a good starting point for heading in this
direction... e.g. for mx.Decimal ;-)

	http://www2.hursley.ibm.com/decimal/

I believe that now with the coercion patches in place, adding
new numeric datatypes should be fairly easy (left aside the
problems intrinsic to numerics themselves).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From martin at loewis.home.cs.tu-berlin.de  Thu Mar 15 17:30:49 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 17:30:49 +0100
Subject: [Python-Dev] Patch Manager Guidelines
Message-ID: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de>

It appears that the Patch Manager Guidelines
(http://python.sourceforge.net/sf-faq.html#a1) don't work with the new
tracker tool anymore. They claim that the status of the patch can be
Open, Accepted, Closed, etc - which is not true: the status can be
only Open, Closed, or Deleted; Accepted is a value of Resolution.

I have the following specific questions: If a patch is accepted, should
it be closed also? If so, how should the resolution change if it is
also committed?

Curious,
Martin



From fdrake at acm.org  Thu Mar 15 17:35:19 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Thu, 15 Mar 2001 11:35:19 -0500 (EST)
Subject: [Python-Dev] Patch Manager Guidelines
In-Reply-To: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de>
References: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de>
Message-ID: <15024.61255.797524.736810@localhost.localdomain>

Martin v. Loewis writes:
 > It appears that the Patch Manager Guidelines
 > (http://python.sourceforge.net/sf-faq.html#a1) don't work with the new
 > tracker tool anymore. They claim that the status of the patch can be
 > Open, Accepted, Closed, etc - which is not true: the status can be
 > only Open, Closed, or Deleted; Accepted is a value of Resolution.

  Thanks for pointing this out!

 > I have the following specific questions: If a patch is accepted, should
 > it be closed also? If so, how should the resolution change if it is
 > also committed?

  I've been setting a patch to accepted-but-open if it needs to be
checked in, and then closing it once the checkin has been made.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Thu Mar 15 17:44:54 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 11:44:54 -0500
Subject: [Python-Dev] Patch Manager Guidelines
In-Reply-To: Your message of "Thu, 15 Mar 2001 17:30:49 +0100."
             <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de> 
References: <200103151630.f2FGUnh01478@mira.informatik.hu-berlin.de> 
Message-ID: <200103151644.LAA09360@cj20424-a.reston1.va.home.com>

> It appears that the Patch Manager Guidelines
> (http://python.sourceforge.net/sf-faq.html#a1) don't work with the new
> tracker tool anymore. They claim that the status of the patch can be
> Open, Accepted, Closed, etc - which is not true: the status can be
> only Open, Closed, or Deleted; Accepted is a value of Resolution.
> 
> I have the following specific questions: If a patch is accepted, should
> it be closed also? If so, how should the resolution change if it is
> also committed?

A patch should only be closed after it has been committed; otherwise
it's too easy to lose track of it.  So I guess the proper sequence is

1. accept; Resolution set to Accepted

2. commit; Status set to Closed

I hope the owner of the sf-faq document can fix it.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Thu Mar 15 18:22:41 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 18:22:41 +0100
Subject: [Python-Dev] Preparing 2.0.1
Message-ID: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>

I've committed a few changes to the 2.0 release branch, and I'd
propose the following procedure when doing so:

- In the checkin message, indicate which file version from the
  mainline is being copied into the release branch.

- In Misc/NEWS, indicate what bugs have been fixed by installing these
  patches. If it was a patch in response to a SF bug report, listing
  the SF bug id should be sufficient; I've put some instructions into
  Misc/NEWS on how to retrieve the bug report for a bug id.

I'd also propose that 2.0.1, at a minimum, should contain the patches
listed on the 2.0 MoinMoin

http://www.python.org/cgi-bin/moinmoin

I've done so only for the _tkinter patch, which was both listed as
critical, and which closed 2 SF bug reports. I've verified that the
sre_parse patch also closes a number of SF bug reports, but have not
copied it to the release branch.

Please let me know what you think.

Martin



From guido at digicool.com  Thu Mar 15 18:39:32 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 12:39:32 -0500
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: Your message of "Thu, 15 Mar 2001 18:22:41 +0100."
             <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> 
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> 
Message-ID: <200103151739.MAA09627@cj20424-a.reston1.va.home.com>

Excellent, Martin!

There are way more patches that we *could* add than are listed on
the MoinMoin Wiki, though.

I hope that somebody has the time to wade through the 2.1 code to look
for gems.  These should all be *pure* bugfixes!

I haven't seen Aahz' PEP in detail yet; I hope there isn't a
requirement that 2.0.1 come out before 2.1?  The licensing stuff may
be holding 2.0.1 up. :-(

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fredrik at effbot.org  Thu Mar 15 19:15:17 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Thu, 15 Mar 2001 19:15:17 +0100
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>
Message-ID: <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid>

Martin wrote:
> I've verified that the sre_parse patch also closes a number of SF
> bug reports, but have not copied it to the release branch.

it's probably best to upgrade to the current SRE code base.

also, it would make sense to bump makeunicodedata.py to 1.8,
and regenerate the unicode database (this adds 38,642 missing
unicode characters).

I'll look into this this weekend, if I find the time.

Cheers /F




From mwh21 at cam.ac.uk  Thu Mar 15 19:28:48 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: Thu, 15 Mar 2001 18:28:48 +0000 (GMT)
Subject: [Python-Dev] python-dev summary, 2001-03-01 - 2001-03-15
Message-ID: <Pine.LNX.4.10.10103151820200.24973-100000@localhost.localdomain>

 This is a summary of traffic on the python-dev mailing list between
 Mar 1 and Mar 14 (inclusive) 2001.  It is intended to inform the
 wider Python community of ongoing developments.  To comment, just
 post to python-list at python.org or comp.lang.python in the usual
 way. Give your posting a meaningful subject line, and if it's about a
 PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep
 iteration) All python-dev members are interested in seeing ideas
 discussed by the community, so don't hesitate to take a stance on a
 PEP if you have an opinion.

 This is the third python-dev summary written by Michael Hudson.
 Previous summaries were written by Andrew Kuchling and can be found
 at:

   <http://www.amk.ca/python/dev/>

 New summaries will appear at:

  <http://starship.python.net/crew/mwh/summaries/>

 and will continue to be archived at Andrew's site.

   Posting distribution (with apologies to mbm)

   Number of articles in summary: 264

    50 |                                             ]|[        
       |                                             ]|[        
       |                                             ]|[        
       |                                             ]|[        
    40 | ]|[                                         ]|[        
       | ]|[                                         ]|[        
       | ]|[                                         ]|[        
       | ]|[                                         ]|[ ]|[    
    30 | ]|[                                         ]|[ ]|[    
       | ]|[                                         ]|[ ]|[    
       | ]|[                                         ]|[ ]|[ ]|[
       | ]|[                                         ]|[ ]|[ ]|[
    20 | ]|[                                         ]|[ ]|[ ]|[
       | ]|[ ]|[                                     ]|[ ]|[ ]|[
       | ]|[ ]|[                                     ]|[ ]|[ ]|[
       | ]|[ ]|[                                 ]|[ ]|[ ]|[ ]|[
    10 | ]|[ ]|[ ]|[                             ]|[ ]|[ ]|[ ]|[
       | ]|[ ]|[ ]|[     ]|[                     ]|[ ]|[ ]|[ ]|[
       | ]|[ ]|[ ]|[     ]|[ ]|[                 ]|[ ]|[ ]|[ ]|[
       | ]|[ ]|[ ]|[ ]|[ ]|[ ]|[ ]|[     ]|[ ]|[ ]|[ ]|[ ]|[ ]|[
     0 +-050-022-012-004-009-006-003-002-003-005-017-059-041-031
        Thu 01| Sat 03| Mon 05| Wed 07| Fri 09| Sun 11| Tue 13|
            Fri 02  Sun 04  Tue 06  Thu 08  Sat 10  Mon 12  Wed 14

 A quiet fortnight on python-dev; the conference a week ago is
 responsible for some of that, but also discussion has been springing
 up on other mailing lists (including the types-sig, doc-sig,
 python-iter and stackless lists, and those are just the ones your
 author is subscribed to).


   * Bug Fix Releases *

 Aahz posted a proposal for a 2.0.1 release, fixing the bugs that have
 been found in 2.0 but not adding the new features.

  <http://mail.python.org/pipermail/python-dev/2001-March/013389.html>

 Guido's response was, essentially, "Good idea, but I don't have the
 time to put into it", and that the wider community would have to put
 in some of the donkey work if this is going to happen.  Signs so far
 are encouraging.


    * Numerics *

 Moshe Zadka posted three new PEP-drafts:

  <http://mail.python.org/pipermail/python-dev/2001-March/013435.html>

 which on discussion became four new PEPs, which are not yet online
 (hint, hint).

 The four titles are

    Unifying Long Integers and Integers
    Non-integer Division
    Adding a Rational Type to Python
    Adding a Rational Literal to Python

 and they will appear fairly soon at

  <http://python.sourceforge.net/peps/pep-0237.html>
  <http://python.sourceforge.net/peps/pep-0238.html>
  <http://python.sourceforge.net/peps/pep-0239.html>
  <http://python.sourceforge.net/peps/pep-0240.html>

 respectively.

 Although pedantically falling slightly out of the remit of this
 summary, I should mention Guido's partial BDFL pronouncement:

  <http://mail.python.org/pipermail/python-dev/2001-March/013587.html>

 A new mailing list has been set up to discuss these issues:

  <http://lists.sourceforge.net/lists/listinfo/python-numerics>


    * Revive the types-sig? *

 Paul Prescod has single-handedly kicked the types-sig into life
 again.

  <http://mail.python.org/sigs/types-sig/>

 The discussion this time seems to be centered on interfaces and how to
 use them effectively.  You never know, we might get somewhere this
 time!

    * stackless *

 Jeremy Hylton posted some comments on Gordon McMillan's new draft of
 the stackless PEP (PEP 219) and the stackless dev day discussion at
 Spam 9.

  <http://mail.python.org/pipermail/python-dev/2001-March/013494.html>

 The discussion has mostly focussed on technical issues; there has
 been no comment on if or when the core Python will become stackless.


    * miscellanea *

 There was some discussion on nested scopes, but mainly on
 implementation issues.  Thomas Wouters promised <wink> to sort out
 the "continue in finally: clause" wart.

Cheers,
M.




From esr at golux.thyrsus.com  Thu Mar 15 19:35:30 2001
From: esr at golux.thyrsus.com (Eric)
Date: Thu, 15 Mar 2001 10:35:30 -0800
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: <200103142305.SAA05872@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Wed, Mar 14, 2001 at 06:05:50PM -0500
References: <E14cx6s-0002zN-00@golux.thyrsus.com> <200103142305.SAA05872@cj20424-a.reston1.va.home.com>
Message-ID: <20010315103530.C1530@thyrsus.com>

Guido van Rossum <guido at digicool.com>:
> > I have fixed some obvious errors (use of the deprecated 'cmp' module;
> > use of regex) but I have encountered run-time errors that are beyond
> > my competence to fix.  From a cursory inspection of the code it looks
> > to me like the freeze tools need adaptation to the new
> > distutils-centric build process.
> 
> The last maintainers were me and Mark Hammond, but neither of us has
> time to look into this right now.  (At least I know I don't.)
> 
> What kind of errors do you encounter?

After cleaning up the bad imports, use of regex, etc, first thing I see
is an assertion failure in the module finder.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

"They that can give up essential liberty to obtain a little temporary 
safety deserve neither liberty nor safety."
	-- Benjamin Franklin, Historical Review of Pennsylvania, 1759.



From guido at digicool.com  Thu Mar 15 19:49:21 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 13:49:21 -0500
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: Your message of "Thu, 15 Mar 2001 10:35:30 PST."
             <20010315103530.C1530@thyrsus.com> 
References: <E14cx6s-0002zN-00@golux.thyrsus.com> <200103142305.SAA05872@cj20424-a.reston1.va.home.com>  
            <20010315103530.C1530@thyrsus.com> 
Message-ID: <200103151849.NAA09878@cj20424-a.reston1.va.home.com>

> > What kind of errors do you encounter?
> 
> After cleaning up the bad imports, use of regex, etc, first thing I see
> is an assertion failure in the module finder.

Are you sure you are using the latest CVS version of freeze?  I didn't
have to clean up any bad imports -- it just works for me.  But maybe
I'm not using all the features?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Thu Mar 15 19:49:37 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 19:49:37 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid> (fredrik@effbot.org)
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid>
Message-ID: <200103151849.f2FInbd02760@mira.informatik.hu-berlin.de>

> it's probably best to upgrade to the current SRE code base.

I'd be concerned about the "pure bugfix" nature of the current SRE
code base. It is probably minor things, like the addition of

+    PyDict_SetItemString(
+        d, "MAGIC", (PyObject*) PyInt_FromLong(SRE_MAGIC)
+        );

+# public symbols
+__all__ = [ "match", "search", "sub", "subn", "split", "findall",
+    "compile", "purge", "template", "escape", "I", "L", "M", "S", "X",
+    "U", "IGNORECASE", "LOCALE", "MULTILINE", "DOTALL", "VERBOSE",
+    "UNICODE", "error" ]
+

+DEBUG = sre_compile.SRE_FLAG_DEBUG # dump pattern after compilation

-    def getgroup(self, name=None):
+    def opengroup(self, name=None):

The famous last words here are "those changes can do no
harm". However, somebody might rely on Pattern objects having a
getgroup method (even though it is not documented). Some code (relying
on undocumented features) may break with 2.1, which is acceptable; it
is not acceptable for a bugfix release.

For the bugfix release, I'd feel much better if a clear set of pure
bug fixes were identified, along with a list of bugs they fix. So "no
new features" would rule out a new constant named MAGIC (*).

If a "pure bugfix" happens to break something as well, we can at least
find out what it fixed in return, and then probably find that the fix
justified the breakage.

Regards,
Martin

(*) There is also a new constant AT_BEGINNING_STRING, but it appears
that it was introduced in response to a bug report.



From esr at golux.thyrsus.com  Thu Mar 15 19:54:17 2001
From: esr at golux.thyrsus.com (Eric)
Date: Thu, 15 Mar 2001 10:54:17 -0800
Subject: [Python-Dev] freeze is broken in 2.x
In-Reply-To: <200103151849.NAA09878@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 15, 2001 at 01:49:21PM -0500
References: <E14cx6s-0002zN-00@golux.thyrsus.com> <200103142305.SAA05872@cj20424-a.reston1.va.home.com> <20010315103530.C1530@thyrsus.com> <200103151849.NAA09878@cj20424-a.reston1.va.home.com>
Message-ID: <20010315105417.J1530@thyrsus.com>

Guido van Rossum <guido at digicool.com>:
> Are you sure you are using the latest CVS version of freeze?  I didn't
> have to clean up any bad imports -- it just works for me.  But maybe
> I'm not using all the features?

I'll cvs update and check.  Thanks.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Still, if you will not fight for the right when you can easily
win without bloodshed, if you will not fight when your victory
will be sure and not so costly, you may come to the moment when
you will have to fight with all the odds against you and only a
precarious chance for survival. There may be a worse case.  You
may have to fight when there is no chance of victory, because it
is better to perish than to live as slaves.
	--Winston Churchill



From skip at pobox.com  Thu Mar 15 20:14:59 2001
From: skip at pobox.com (Skip Montanaro)
Date: Thu, 15 Mar 2001 13:14:59 -0600 (CST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103150614.BAA04221@panix6.panix.com>
References: <200103150614.BAA04221@panix6.panix.com>
Message-ID: <15025.5299.651586.244121@beluga.mojam.com>

    aahz> Starting with Python 2.0, all feature releases are required to
    aahz> have the form X.Y; patch releases will always be of the form
    aahz> X.Y.Z.  To clarify the distinction between a bug fix release and a
    aahz> patch release, all non-bug fix patch releases will have the suffix
    aahz> "p" added.  For example, "2.1" is a feature release; "2.1.1" is a
    aahz> bug fix release; and "2.1.2p" is a patch release that contains
    aahz> minor feature enhancements.

I don't understand the need for (or fundamental difference between) bug fix
and patch releases.  If 2.1 is the feature release and 2.1.1 is a bug fix
release, is 2.1.2p a branch off of 2.1.2 or 2.1.1?
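The quoted scheme can be sketched as a tiny classifier (a hypothetical helper for illustration, not anything from the PEP):

```python
import re

def classify(version):
    """Classify a version string under the quoted PEP 6 scheme:
    X.Y = feature release, X.Y.Z = bug fix release, X.Y.Zp = patch
    release with minor feature enhancements."""
    m = re.fullmatch(r"(\d+)\.(\d+)(?:\.(\d+)(p)?)?", version)
    if m is None:
        raise ValueError("unrecognized version: %r" % version)
    if m.group(3) is None:
        return "feature"
    return "patch" if m.group(4) else "bugfix"

assert classify("2.1") == "feature"
assert classify("2.1.1") == "bugfix"
assert classify("2.1.2p") == "patch"
```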

    aahz> The Patch Czar is the counterpart to the BDFL for patch releases.
    aahz> However, the BDFL and designated appointees retain veto power over
    aahz> individual patches and the decision of whether to label a patch
    aahz> release as a bug fix release.

I propose that instead of (or in addition to) the Patch Czar you have a
Release Shepherd (RS) for each feature release, presumably someone motivated
to help maintain that particular release.  This person (almost certainly
someone outside PythonLabs) would be responsible for the bug fix releases
associated with a single feature release.  Your use of 2.1's sre as a "small
feature change" for 2.0 and 1.5.2 is an example where having an RS for each
feature release would be worthwhile.  Applying sre 2.1 to the 2.0 source
would probably be reasonably easy.  Adding it to 1.5.2 would be much more
difficult (no Unicode), and so would quite possibly be accepted by the 2.0
RS and rejected by the 1.5.2 RS.

As time passes, interest in further bug fix releases for specific feature
releases will probably wane.  When interest drops far enough the RS could
simply declare that branch closed and move on to other things.

I envision the Patch Czar voting a general yea or nay on a specific patch,
then passing it along to all the current RSs, who would make the final
decision about whether that patch is appropriate for the release they are
managing.

I suggest dumping the patch release concept and just going with bug fix
releases.  The system will be complex enough without them.  If it proves
desirable later, you can always add them.

Skip



From fredrik at effbot.org  Thu Mar 15 20:25:45 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Thu, 15 Mar 2001 20:25:45 +0100
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid> <200103151849.f2FInbd02760@mira.informatik.hu-berlin.de>
Message-ID: <03d101c0ad85$bc812610$e46940d5@hagrid>

martin wrote:

> I'd be concerned about the "pure bugfix" nature of the current SRE
> code base. 

well, unlike you, I wrote the code.

> -    def getgroup(self, name=None):
> +    def opengroup(self, name=None):
> 
> The famous last words here are "those changes can do no
> harm". However, somebody might rely on Pattern objects having a
> getgroup method (even though it is not documented).

it may sound weird, but I'd rather support people who rely on regular
expressions working as documented...

> For the bugfix release, I'd feel much better if a clear set of pure
> bug fixes were identified, along with a list of bugs they fix. So "no
> new features" would rule out a new constant named MAGIC (*).

what makes you so sure that MAGIC wasn't introduced to deal with
a bug report?  (hint: it was)

> If a "pure bugfix" happens to break something as well, we can atleast
> find out what it fixed in return, and then probably find that the fix
> justified the breakage.

more work, and far fewer bugs fixed.  let's hope you have lots of
volunteers lined up...

Cheers /F




From fredrik at pythonware.com  Thu Mar 15 20:43:11 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 15 Mar 2001 20:43:11 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
References: <200103150614.BAA04221@panix6.panix.com> <15025.5299.651586.244121@beluga.mojam.com>
Message-ID: <000f01c0ad88$2cd4b970$e46940d5@hagrid>

skip wrote:
> I suggest dumping the patch release concept and just going with bug fix
> releases.  The system will be complex enough without them.  If it proves
> desirable later, you can always add them.

agreed.

> Applying sre 2.1 to the 2.0 source would probably be reasonably easy.
> Adding it to 1.5.2 would be much more difficult (no Unicode), and so
> would quite possibly be accepted by the 2.0 RS and rejected by the
> 1.5.2 RS.

footnote: SRE builds and runs just fine under 1.5.2:

    http://www.pythonware.com/products/sre

Cheers /F




From thomas.heller at ion-tof.com  Thu Mar 15 21:00:19 2001
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Thu, 15 Mar 2001 21:00:19 +0100
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>
Message-ID: <0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>

[Martin v. Loewis]
> I'd also propose that 2.0.1, at a minimum, should contain the patches
> listed on the 2.0 MoinMoin
> 
> http://www.python.org/cgi-bin/moinmoin
> 
So how should requests for patches be submitted?
Should I enter them into the wiki, post to python-dev,
or email aahz?

I would kindly request that two of the fixed bugs I reported
go into 2.0.1:

Bug id 231064, sys.path not set correctly in embedded python interpreter
Bug id 221965, 10 in xrange(10) returns 1
(I would consider the last one as critical)
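The second bug is easy to state in miniature (shown here with range, xrange's Python 3 successor, purely for illustration):

```python
# Bug id 221965 in miniature: in Python 2.0, `10 in xrange(10)`
# wrongly returned true. The correct endpoint semantics:
assert 10 not in range(10)    # the stop value is excluded...
assert 9 in range(10)         # ...but the last element is present
assert len(range(10)) == 10
```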

Thomas




From aahz at pobox.com  Thu Mar 15 21:11:31 2001
From: aahz at pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 12:11:31 -0800 (PST)
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <no.id> from "Thomas Heller" at Mar 15, 2001 09:00:19 PM
Message-ID: <200103152011.PAA28835@panix3.panix.com>

> So how should requests for patches be submitted?
> Should I enter them into the wiki, post to python-dev,
> email to aahz?

As you'll note in PEP 6, this is one of the issues that needs some
resolving.  The correct solution long-term will likely involve some
combination of a new mailing list (so python-dev doesn't get overwhelmed)
and SourceForge bug management.  In the meantime, I'm keeping a record.

Part of the problem in simply moving forward is that I am neither on
python-dev myself nor do I have CVS commit privileges; I'm also not much
of a C programmer.  Thomas Wouters and Jeremy Hylton have made statements
that could be interpreted as saying that they're willing to be the Patch
Czar, but while I assume that either would be passed by acclamation, I'm
certainly not going to shove it on them.  If either accepts, I'll be glad
to take on whatever administrative tasks they ask for.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"The overexamined life sure is boring."  --Loyal Mini Onion



From martin at loewis.home.cs.tu-berlin.de  Thu Mar 15 21:39:14 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 21:39:14 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <03d101c0ad85$bc812610$e46940d5@hagrid> (fredrik@effbot.org)
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <02fc01c0ad7b$e72c0ba0$e46940d5@hagrid> <200103151849.f2FInbd02760@mira.informatik.hu-berlin.de> <03d101c0ad85$bc812610$e46940d5@hagrid>
Message-ID: <200103152039.f2FKdEQ22768@mira.informatik.hu-berlin.de>

> > I'd be concerned about the "pure bugfix" nature of the current SRE
> > code base. 
> 
> well, unlike you, I wrote the code.

I am aware of that. My apologies if I suggested otherwise.

> it may sound weird, but I'd rather support people who rely on regular
> expressions working as documented...

That is not weird at all.

> > For the bugfix release, I'd feel much better if a clear set of pure
> > bug fixes were identified, along with a list of bugs they fix. So "no
> > new feature" would rule out "no new constant named MAGIC" (*).
> 
> what makes you so sure that MAGIC wasn't introduced to deal with
> a bug report?  (hint: it was)

I am not sure. What was the bug report that caused its introduction?

> > If a "pure bugfix" happens to break something as well, we can atleast
> > find out what it fixed in return, and then probably find that the fix
> > justified the breakage.
> 
> more work, and far fewer bugs fixed.  let's hope you have lots of
> volunteers lined up...

Nobody has asked *you* to do that work. If you think your time is
better spent in fixing existing bugs instead of back-porting the fixes
to 2.0 - there is nothing wrong with that at all. It all depends on
what the volunteers are willing to do.

Regards,
Martin



From guido at digicool.com  Thu Mar 15 22:14:16 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 15 Mar 2001 16:14:16 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Thu, 15 Mar 2001 20:43:11 +0100."
             <000f01c0ad88$2cd4b970$e46940d5@hagrid> 
References: <200103150614.BAA04221@panix6.panix.com> <15025.5299.651586.244121@beluga.mojam.com>  
            <000f01c0ad88$2cd4b970$e46940d5@hagrid> 
Message-ID: <200103152114.QAA10305@cj20424-a.reston1.va.home.com>

> skip wrote:
> > I suggest dumping the patch release concept and just going with bug fix
> > releases.  The system will be complex enough without them.  If it proves
> > desirable later, you can always add them.
> 
> agreed.

+1

> > Applying sre 2.1 to the 2.0 source would probably be reasonably easy.
> > Adding it to 1.5.2 would be much more difficult (no Unicode), and so
> > would quite possibly be accepted by the 2.0 RS and rejected by the
> > 1.5.2 RS.
> 
> footnote: SRE builds and runs just fine under 1.5.2:
> 
>     http://www.pythonware.com/products/sre

In the specific case of SRE, I'm +1 on keeping the code base in 2.0.1
completely synchronized with 2.1.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Thu Mar 15 22:32:47 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Thu, 15 Mar 2001 22:32:47 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>
	(thomas.heller@ion-tof.com)
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de> <0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>
Message-ID: <200103152132.f2FLWlE29312@mira.informatik.hu-berlin.de>

> So how should requests for patches be submitted?
> Should I enter them into the wiki, post to python-dev,
> email to aahz?

Personally, I think 2.0.1 should be primarily driven by user requests;
I think this is also the spirit of the PEP. I'm not even sure that
going over the entire code base systematically and copying all bug
fixes is a good idea.

In that sense, having somebody collect these requests is probably the
right approach. In this specific case, I'll take care of them, unless
somebody else proposes a different procedure. For the record, you are
requesting inclusion of

rev 1.23 of PC/getpathp.c
rev 2.21, 2.22 of Objects/rangeobject.c
rev 1.20 of Lib/test/test_b2.py

Interestingly enough, 2.22 of rangeobject.c also adds three attributes
to the xrange object: start, stop, and step. That is clearly a new
feature, so should it be moved into 2.0.1? Otherwise, the fix must be
back-ported to 2.0.

I think we need a policy decision here, which could probably take
one of three outcomes:
1. everybody with CVS commit access can decide to move patches from
   the mainline to the branch. That would mean I could move these
   patches, and Fredrik Lundh could install the sre code base as-is.

2. the author of the original patch can make that decision. That would
   mean that Fredrik Lundh can still install his code as-is, but I'd
   have to ask Fred's permission.

3. the bug release coordinator can make that decision. That means that
   Aahz must decide.

If it is 1 or 2, some guideline is probably needed as to what exactly
is suitable for inclusion into 2.0.1. Guido has requested "*pure*
bugfixes", which, to me, says

a) sre must be carefully reviewed change for change
b) the three attributes on xrange objects must not appear in 2.0.1

In any case, I'm in favour of a much more careful operation for a
bugfix release. That probably means not all bugs that have been fixed
already will be fixed in 2.0.1; I would not expect otherwise.

Regards,
Martin



From aahz at pobox.com  Thu Mar 15 23:21:12 2001
From: aahz at pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 14:21:12 -0800 (PST)
Subject: [Python-Dev] Re: Preparing 2.0.1
In-Reply-To: <no.id> from "Martin v. Loewis" at Mar 15, 2001 06:22:41 PM
Message-ID: <200103152221.RAA16060@panix3.panix.com>

> - In the checkin message, indicate which file version from the
>   mainline is being copied into the release branch.

Sounds good.

> - In Misc/NEWS, indicate what bugs have been fixed by installing these
>   patches. If it was a patch in response to a SF bug report, listing
>   the SF bug id should be sufficient; I've put some instructions into
>   Misc/NEWS on how to retrieve the bug report for a bug id.

Good, too.

> I've done so only for the _tkinter patch, which was both listed as
> critical, and which closed 2 SF bug reports. I've verified that the
> sre_parse patch also closes a number of SF bug reports, but have not
> copied it to the release branch.

I'm a little concerned that the 2.0 branch is being updated without a
2.0.1 target created, but it's quite possible my understanding of how
this should work is faulty.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From aahz at pobox.com  Thu Mar 15 23:34:26 2001
From: aahz at pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 14:34:26 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <no.id> from "Skip Montanaro" at Mar 15, 2001 01:14:59 PM
Message-ID: <200103152234.RAA16951@panix3.panix.com>

>     aahz> Starting with Python 2.0, all feature releases are required to
>     aahz> have the form X.Y; patch releases will always be of the form
>     aahz> X.Y.Z.  To clarify the distinction between a bug fix release and a
>     aahz> patch release, all non-bug fix patch releases will have the suffix
>     aahz> "p" added.  For example, "2.1" is a feature release; "2.1.1" is a
>     aahz> bug fix release; and "2.1.2p" is a patch release that contains
>     aahz> minor feature enhancements.
> 
> I don't understand the need for (or fundamental difference between) bug fix
> and patch releases.  If 2.1 is the feature release and 2.1.1 is a bug fix
> release, is 2.1.2p a branch off of 2.1.2 or 2.1.1?

That's one of the issues that needs to be resolved if we permit both
patch releases and bug fix releases.  My preference would be that 2.1.2p
is a branch from 2.1.1.

>     aahz> The Patch Czar is the counterpart to the BDFL for patch releases.
>     aahz> However, the BDFL and designated appointees retain veto power over
>     aahz> individual patches and the decision of whether to label a patch
>     aahz> release as a bug fix release.
> 
> I propose that instead of (or in addition to) the Patch Czar you have a
> Release Shepherd (RS) for each feature release, presumably someone motivated
> to help maintain that particular release.  This person (almost certainly
> someone outside PythonLabs) would be responsible for the bug fix releases
> associated with a single feature release.  Your use of 2.1's sre as a "small
> feature change" for 2.0 and 1.5.2 is an example where having an RS for each
> feature release would be worthwhile.  Applying sre 2.1 to the 2.0 source
> would probably be reasonably easy.  Adding it to 1.5.2 would be much more
> difficult (no Unicode), and so would quite possibly be accepted by the 2.0
> RS and rejected by the 1.5.2 RS.

That may be a good idea.  Comments from others?  (Note that in the case
of sre, I was aware that Fredrik had already backported to both 2.0 and
1.5.2.)

> I suggest dumping the patch release concept and just going with bug fix
> releases.  The system will be complex enough without them.  If it proves
> desirable later, you can always add them.

Well, that was my original proposal before turning this into an official
PEP.  The stumbling block was the example of the case-sensitive import
patch (that permits Python's use on BeOS and MacOS X) for 2.1.  Both
Guido and Tim stated their belief that this was a "feature" and not a
"bug fix" (and I don't really disagree with them).  This leaves the
following options (assuming that backporting the import fix doesn't break
one of the Prohibitions):

* Change the minds of Guido/Tim to make the import issue a bugfix.

* Don't backport case-sensitive imports to 2.0.

* Permit minor feature additions/changes.

If we choose that last option, I believe a distinction should be drawn
between releases that contain only bugfixes and releases that contain a
bit more.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From thomas at xs4all.net  Thu Mar 15 23:37:37 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 15 Mar 2001 23:37:37 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103150614.BAA04221@panix6.panix.com>; from aahz@panix.com on Thu, Mar 15, 2001 at 01:14:54AM -0500
References: <200103150614.BAA04221@panix6.panix.com>
Message-ID: <20010315233737.B29286@xs4all.nl>

On Thu, Mar 15, 2001 at 01:14:54AM -0500, aahz at panix.com wrote:
> [posted to c.l.py.announce and c.l.py; followups to c.l.py; cc'd to
> python-dev]

>     Patch releases are required to adhere to the following
>     restrictions:

>     1. There must be zero syntax changes.  All .pyc and .pyo files
>        must work (no regeneration needed) with all patch releases
>        forked off from a feature release.

Hmm... Would making 'continue' work inside 'try' count as a bugfix or as a
feature ? It's technically not a syntax change, but practically it is.
(Invalid syntax suddenly becomes valid.) 
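For concreteness, this is the construct in question, as a sketch runnable in any modern Python. In Python 2.0 the `continue` line was rejected at compile time; from 2.1 on it compiles:

```python
# 'continue' inside a 'try' clause: a SyntaxError in Python 2.0,
# legal from 2.1 onward (and in any modern Python).
collected = []
for i in range(4):
    try:
        if i == 2:
            continue        # pre-2.1: SyntaxError on this line
        collected.append(i)
    except ValueError:
        pass

print(collected)  # -> [0, 1, 3]
```

So no .pyc format changes, but source accepted by a patched release would fail to compile on stock 2.0.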

>   Bug Fix Releases

>     Bug fix releases are a subset of all patch releases; it is
>     prohibited to add any features to the core in a bug fix release.
>     A patch release that is not a bug fix release may contain minor
>     feature enhancements, subject to the Prohibitions section.

I'm not for this 'bugfix release', 'patch release' difference. The
numbering/naming convention is too confusing, not clear enough, and I don't
see the added benefit of adding limited features. If people want features,
they should go and get a feature release. The most important bit in patch
('bugfix') releases is not to add more bugs, and rewriting parts of code to
fix a bug is something that is quite likely to insert more bugs. Sure, as
the patch coder, you are probably certain there are no bugs -- but so was
whoever added the bug in the first place :)

>     The Patch Czar decides when there are a sufficient number of
>     patches to warrant a release.  The release gets packaged up,
>     including a Windows installer, and made public as a beta release.
>     If any new bugs are found, they must be fixed and a new beta
>     release publicized.  Once a beta cycle completes with no new bugs
>     found, the package is sent to PythonLabs for certification and
>     publication on python.org.

>     Each beta cycle must last a minimum of one month.

This process probably needs a firm smack with reality, but that would have
to wait until it meets some, first :) Deciding when to do a bugfix release
is very tricky: some bugs warrant a quick release, but waiting to assemble
more is generally a good idea. The whole beta cycle and windows
installer/RPM/etc process is also a bottleneck. Will Tim do the Windows
Installer (or whoever does it for the regular releases) ? If he's building
the installer anyway, why can't he 'bless' the release right away ?

I'm also not sure if a beta cycle in a bugfix release is really necessary,
especially a month long one. Given that we have a feature release planned
each 6 months, and a feature release has generally 2 alphas and 2 betas,
plus sometimes a release candidate, plus the release itself, and a bugfix
release would have one or two betas too, and say that we do two betas in
those six months, that would make 10+ 'releases' of various form in those 6
months. Ain't no-one[*] going to check them out for a decent spin, they'll
just wait for the final version.

>     Should the first patch release following any feature release be
>     required to be a bug fix release?  (Aahz proposes "yes".)
>     Is it allowed to do multiple forks (e.g. is it permitted to have
>     both 2.0.2 and 2.0.2p)?  (Aahz proposes "no".)
>     Does it make sense for a bug fix release to follow a patch
>     release?  (E.g., 2.0.1, 2.0.2p, 2.0.3.)

More reasons not to have separate featurebugfixreleasethingies and
bugfix-releases :)

>     What is the equivalent of python-dev for people who are
>     responsible for maintaining Python?  (Aahz proposes either
>     python-patch or python-maint, hosted at either python.org or
>     xs4all.net.)

It would probably never be hosted at .xs4all.net. We use the .net address
for network related stuff, and as a nice Personality Enhancer (read: IRC
dick extender) for employees. We'd be happy to host stuff, but I would
actually prefer to have it under a python.org or some other python-related
domainname. That forestalls python questions going to admin at xs4all.net :) A
small logo somewhere on the main page would be nice, but stuff like that
should be discussed if it's ever an option, not just because you like the
name 'XS4ALL' :-)

>     Does SourceForge make it possible to maintain both separate and
>     combined bug lists for multiple forks?  If not, how do we mark
>     bugs fixed in different forks?  (Simplest is to simply generate a
>     new bug for each fork that it gets fixed in, referring back to the
>     main bug number for details.)

We could make it a separate SF project, just for the sake of keeping
bugreports/fixes in the maintenance branch and the head branch apart. The
main Python project already has an unwieldy number of open bugreports and
patches.

I'm also for starting the maintenance branch right after the real release,
and start adding bugfixes to it right away, as soon as they show up. Keeping
up to date on bugfixes to the head branch is then as 'simple' as watching
python-checkins. (Up until the fact a whole subsystem gets rewritten, that
is :) People should still be able to submit bugfixes for the maintenance
branch specifically.

And I'm still willing to be the patch monkey, though I don't think I'm the
only or the best candidate. I'll happily contribute regardless of who gets
the blame :)

[*] There, that better, Moshe ?
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Thu Mar 15 23:44:21 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 15 Mar 2001 23:44:21 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103152234.RAA16951@panix3.panix.com>; from aahz@pobox.com on Thu, Mar 15, 2001 at 02:34:26PM -0800
References: <no.id> <200103152234.RAA16951@panix3.panix.com>
Message-ID: <20010315234421.C29286@xs4all.nl>

On Thu, Mar 15, 2001 at 02:34:26PM -0800, Aahz Maruch wrote:

[ How to get case-insensitive import fixed in 2.0.x ]

> * Permit minor feature additions/changes.

> If we choose that last option, I believe a distinction should be drawn
> between releases that contain only bugfixes and releases that contain a
> bit more.

We could make the distinction in the release notes. It could be a
'PURE BUGFIX RELEASE' or a 'FEATURE FIX RELEASE'. Bugfix releases just fix
bugs, that is, wrong behaviour. Feature fix releases fix misfeatures, like
the case insensitive import issues. The difference between the two should be
explained in the paragraph following the header, for *each* release. For
example,

This is a 		PURE BUGFIX RELEASE.
This means that it only fixes behaviour that was previously giving an error,
or providing obviously wrong results. Only code relying on the outcome of
obviously incorrect code can be affected.

and

This is a 		FEATURE FIX RELEASE
This means that the (unexpected) behaviour of one or more features was
changed. This is a low-impact change that is unlikely to affect anyone, but
it is theoretically possible. See below for a list of possible effects: 
[ list of mis-feature-fixes and their result. ]

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From greg at cosc.canterbury.ac.nz  Thu Mar 15 23:45:50 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 11:45:50 +1300 (NZDT)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AB0B863.52DFB61C@tismer.com>
Message-ID: <200103152245.LAA05494@s454.cosc.canterbury.ac.nz>

> But most probably, it will run interpreters from time to time.
> These can be told to take the scheduling role on.

You'll have to expand on that. My understanding is that
all the uthreads would have to run in a single C-level
interpreter invocation which can never be allowed to
return. I don't see how different interpreters can be
made to "take on" this role. If that were possible,
there wouldn't be any problem in the first place.

> It does not matter on which interpreter level we are,
> we just can't switch to frames of other levels. But
> even leaving a frame chain, and re-entering later
> with a different stack level is no problem.

You'll have to expand on that, too. Those two sentences
sound contradictory to me.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From thomas at xs4all.net  Thu Mar 15 23:54:08 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 15 Mar 2001 23:54:08 +0100
Subject: [Python-Dev] Re: Preparing 2.0.1
In-Reply-To: <200103152221.RAA16060@panix3.panix.com>; from aahz@pobox.com on Thu, Mar 15, 2001 at 02:21:12PM -0800
References: <no.id> <200103152221.RAA16060@panix3.panix.com>
Message-ID: <20010315235408.D29286@xs4all.nl>

On Thu, Mar 15, 2001 at 02:21:12PM -0800, Aahz Maruch wrote:

> I'm a little concerned that the 2.0 branch is being updated without a
> 2.0.1 target created, but it's quite possible my understanding of how
> this should work is faulty.

Probably (no offense intended) :) A maintenance branch was created together
with the release tag. A branch is a tag with an even number of dots. You can
either use cvs commit magic to commit a version to the branch, or you can
checkout a new tree or update a current tree with the branch-tag given in a
'-r' option. The tag then becomes sticky: if you run update again, it will
update against the branch files. If you commit, it will commit to the branch
files.

I keep the Mailman 2.0.x and 2.1 (head) branches in two different
directories, the 2.0-branch one checked out with:

cvs -d twouters at cvs.mailman.sourceforge.net:/cvsroot/mailman co -r \
Release_2_0_1-branch mailman; mv mailman mailman-2.0.x

It makes for very easy administration between releases. The one time I tried to
automatically import patches between two branches, I fucked up Mailman 2.0.2
and Barry had to release 2.0.3 less than a week later ;)

When you have a maintenance branch and you want to make a release in it, you
simply update your tree to the current state of that branch, and tag all the
files with tag (in Mailman) Release_2_0_3. You can then check out
specifically those files (and not changes that arrived later) and make a
tarball/windows install out of them.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From aahz at pobox.com  Fri Mar 16 00:17:29 2001
From: aahz at pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 15:17:29 -0800 (PST)
Subject: [Python-Dev] Re: Preparing 2.0.1
In-Reply-To: <20010315235408.D29286@xs4all.nl> from "Thomas Wouters" at Mar 15, 2001 11:54:08 PM
Message-ID: <200103152317.SAA04392@panix2.panix.com>

Thanks.  Martin already cleared it up for me in private e-mail.  This
kind of knowledge lack is why I shouldn't be the Patch Czar, at least
not initially.  ;-)
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From greg at cosc.canterbury.ac.nz  Fri Mar 16 00:29:52 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 12:29:52 +1300 (NZDT)
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIENNJFAA.tim.one@home.com>
Message-ID: <200103152329.MAA05500@s454.cosc.canterbury.ac.nz>

Tim Peters <tim.one at home.com>:
> [Guido]
>> Using decimal floating point won't fly either,
> If you again mean "by default", also agreed.

But if it's *not* by default, it won't stop naive users
from getting tripped up.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From aahz at pobox.com  Fri Mar 16 00:44:05 2001
From: aahz at pobox.com (Aahz Maruch)
Date: Thu, 15 Mar 2001 15:44:05 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010315234421.C29286@xs4all.nl> from "Thomas Wouters" at Mar 15, 2001 11:44:21 PM
Message-ID: <200103152344.SAA06969@panix2.panix.com>

Thomas Wouters:
>
> [ How to get case-insensitive import fixed in 2.0.x ]
> 
> Aahz:
>>
>> * Permit minor feature additions/changes.
>> 
>> If we choose that last option, I believe a distinction should be drawn
>> between releases that contain only bugfixes and releases that contain a
>> bit more.
> 
> We could make the distinction in the release notes. It could be a
> 'PURE BUGFIX RELEASE' or a 'FEATURE FIX RELEASE'. Bugfix releases just fix
> bugs, that is, wrong behaviour. feature fix releases fix misfeatures, like
> the case insensitive import issues. The difference between the two should be
> explained in the paragraph following the header, for *each* release. For
> example,

I shan't whine if BDFL vetoes it, but I think this info ought to be
encoded in the version number.  Other than that, it seems that we're
mostly quibbling over wording, and it doesn't matter much to me how we
do it; your suggestion is fine with me.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From greg at cosc.canterbury.ac.nz  Fri Mar 16 00:46:07 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 12:46:07 +1300 (NZDT)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103152234.RAA16951@panix3.panix.com>
Message-ID: <200103152346.MAA05504@s454.cosc.canterbury.ac.nz>

aahz at pobox.com (Aahz Maruch):

> My preference would be that 2.1.2p is a branch from 2.1.1.

That could be a rather confusing numbering system.

Also, once there has been a patch release, does that mean that
the previous sequence of bugfix-only releases is then closed off?

Even a minor feature addition has the potential to introduce
new bugs. Some people may not want to take even that small
risk, but still want to keep up with bug fixes, so there may
be a demand for a further bugfix release to 2.1.1 after
2.1.2p is released. How would such a release be numbered?

Seems to me that if you're going to have minor feature releases
at all, you need a four-level numbering system: W.X.Y.Z,
where Y is the minor feature release number and Z the bugfix
release number.
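Treated as tuples, such four-level numbers order the way you'd want. A small sketch of the idea (the padding rule and the scheme itself follow Greg's hypothetical proposal, not anything Python actually does):

```python
def parse_version(s):
    """Parse 'W.X.Y.Z' into a 4-tuple, padding shorter forms with zeros."""
    parts = [int(p) for p in s.split(".")]
    return tuple(parts + [0] * (4 - len(parts)))

# 2.1 < 2.1.0.1 (bugfix to 2.1) < 2.1.1 (minor feature release)
assert parse_version("2.1") < parse_version("2.1.0.1") < parse_version("2.1.1")
```

Under this scheme a bugfix to the pre-feature line (2.1.0.2) can still be released after the minor feature release 2.1.1 exists, with no ambiguity.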

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Fri Mar 16 00:48:31 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 16 Mar 2001 12:48:31 +1300 (NZDT)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010315234421.C29286@xs4all.nl>
Message-ID: <200103152348.MAA05507@s454.cosc.canterbury.ac.nz>

Thomas Wouters <thomas at xs4all.net>:

> This means that the (unexpected) behaviour of one or more features was
> changed. This is a low-impact change that is unlikely to affect
> anyone

Ummm... if it's so unlikely to affect anything, is it really
worth making a special release for it?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim_one at email.msn.com  Fri Mar 16 02:34:52 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 15 Mar 2001 20:34:52 -0500
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <200103152329.MAA05500@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEPJJFAA.tim_one@email.msn.com>

[Guido]
> Using decimal floating point won't fly either,

[Tim]
> If you again mean "by default", also agreed.

[Greg Ewing]
> But if it's *not* by default, it won't stop naive users
> from getting tripped up.

Naive users are tripped up by many things.  I want to stop them in *Python*
from stumbling over 1/3, not over 1./3 or 0.5.  Changing the meaning of the
latter won't fly, not at this stage in the language's life; if the language
were starting from scratch, sure, but it's not.

I have no idea why Guido is so determined that the *former* (1/3) yield
binary floating point too (as opposed to something saner, be it rationals or
decimal fp), but I'm still trying to provoke him into explaining that part
<0.5 wink>.

I believe users (both newbies and experts) would also benefit from an
explicit way to spell a saner alternative using a tagged fp notation.
Whatever that alternative may be, I want 1/3 (not 1./3. or 0.5 or 1e100) to
yield that too without futzing with tags.




From tim_one at email.msn.com  Fri Mar 16 03:25:41 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Thu, 15 Mar 2001 21:25:41 -0500
Subject: [Python-Dev] RE: [Python-numerics]Re: WYSIWYG decimal fractions
In-Reply-To: <200103151542.KAA09191@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEPKJFAA.tim_one@email.msn.com>

[Tim]
> I think you'd have a very hard time finding any pre-college
> level teacher who wants to teach binary fp.  Your ABC experience is
> consistent with that too.

[Guido]
> "Want to", no.  But whether they're teaching Java, C++, or Pascal,
> they have no choice: if they need 0.5, they'll need binary floating
> point, whether they explain it adequately or not.  Possibly they are
> all staying away from the decimal point completely, but I find that
> hard to believe.

Pascal is the only language there with any claim to newbie friendliness
(Stroustrup's essays notwithstanding).  Along with C, it grew up in the era
of mondo expensive mainframes with expensive binary floating-point hardware
(the CDC boxes Wirth used were designed by S. Cray, and like all such were
fast-fp-at-any-cost designs).

As the earlier Kahan quote said, the massive difference between then and now
is the "innocence" of a vastly larger computer audience.  A smaller
difference is that Pascal is effectively dead now.  C++ remains constrained
by compatibility with C, although any number of decimal class libraries are
available for it, and run as fast as C++ can make them run.  The BigDecimal
class has been standard in Java since 1.1, but, since it's Java, it's so
wordy to use that it's as tedious as everything else in Java for more than
occasional use.

OTOH, from Logo to DrScheme, with ABC and REXX in between, *some* builtin
alternative to binary fp is a feature of all languages I know of that aim not
to drive newbies insane.  "Well, its non-integer arithmetic is no worse than
C++'s" is no selling point for Python.

>>>  But other educators (e.g. Randy Pausch, and the folks who did
>>> VPython) strongly recommend this based on user observation, so
>>> there's hope.

>> Alice is a red herring!  What they wanted was for 1/2 *not* to
>> mean 0.  I've read the papers and dissertations too -- there was
>> no plea for binary fp in those, just that division not throw away
>> info.

> I never said otherwise.

OK, but then I don't know what it is you were saying.  Your sentence
preceding "... strongly recommend this ..." ended:

    this would mean an approximate, binary f.p. result for 1/3, and
    this does not seem to have the support of the educators ...

and I assumed the "this" in "Randy Paush, and ... VPython strongly recommend
this" also referred to "an approximate, binary f.p. result for 1/3".  Which
they did not strongly recommend.  So I'm lost as to what you're saying they
did strongly recommend.

Other people in this thread have said that 1./3. should give an exact
rational or a decimal fp result, but I have not.  I have said 1/3 should not
be 0, but there are at least 3 schemes on the table which deliver a non-zero
result for 1/3, only one of which is to deliver a binary fp result.

> It just boils down to binary fp as the only realistic choice.

For 1./3. and 0.67 I agree (for backward compatibility), but I've seen no
identifiable argument in favor of binary fp for 1/3.  Would Alice's users be
upset if that returned a rational or decimal fp value instead?  I'm tempted
to say "of course not", but I really haven't asked them <wink>.




From tim.one at home.com  Fri Mar 16 04:16:12 2001
From: tim.one at home.com (Tim Peters)
Date: Thu, 15 Mar 2001 22:16:12 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: <3AB0EE66.37E6C633@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEAAJGAA.tim.one@home.com>

[M.-A. Lemburg]
> Just out of curiosity: is there a usable decimal type implementation
> somewhere on the net which we could beat on ?

ftp://ftp.python.org/pub/python/
    contrib-09-Dec-1999/DataStructures/FixedPoint.py

It's more than two years old, and regularly mentioned on c.l.py.  From the
tail end of the module docstring:

"""
The following Python operators and functions accept FixedPoints in the
expected ways:

    binary + - * / % divmod
        with auto-coercion of other types to FixedPoint.
        + - % divmod  of FixedPoints are always exact.
        * / of FixedPoints may lose information to rounding, in
            which case the result is the infinitely precise answer
            rounded to the result's precision.
        divmod(x, y) returns (q, r) where q is a long equal to
            floor(x/y) as if x/y were computed to infinite precision,
            and r is a FixedPoint equal to x - q * y; no information
            is lost.  Note that q has the sign of y, and abs(r) < abs(y).
    unary -
    == != < > <= >=  cmp
    min  max
    float  int  long    (int and long truncate)
    abs
    str  repr
    hash
    use as dict keys
    use as boolean (e.g. "if some_FixedPoint:" -- true iff not zero)
"""

> I for one would be very interested in having a decimal type
> around (with fixed precision and scale),

FixedPoint is unbounded "to the left" of the point but maintains a fixed and
user-settable number of (decimal) digits "after the point".  You can easily
subclass it to complain about overflow, or whatever other damn-fool thing you
think is needed <wink>.
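
The "* /" rounding rule quoted above -- the infinitely precise answer
rounded to the result's precision -- can be sketched with scaled Python
longs, which is essentially how FixedPoint stores its values.  The helper
name and the half-up rounding below are illustrative only; FixedPoint's
actual rounding rule may differ:

```python
def fp_mul(a, b, p):
    """Multiply two fixed-point values held as integers scaled by 10**p.

    The exact product carries 2*p fractional digits; round it back to
    p digits (half-up here, purely for illustration).
    """
    exact = a * b                 # scaled by 10**(2*p); nothing lost yet
    scale = 10 ** p
    q, r = divmod(exact, scale)   # discard the extra p digits...
    if 2 * r >= scale:            # ...rounding rather than truncating
        q += 1
    return q

# 1.25 * 1.25 at p=2: exactly 1.5625, rounded to 1.56
assert fp_mul(125, 125, 2) == 156
# 1.50 * 1.50 at p=2: exactly 2.25, no rounding needed
assert fp_mul(150, 150, 2) == 225
```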

> since databases rely on these a lot and I would like to assure
> that passing database data through Python doesn't cause any data
> loss due to rounding issues.

Define your ideal API and maybe I can implement it someday.  My employer also
has use for this.  FixedPoint.py is better suited to computation than I/O,
though, since it uses Python longs internally, and conversion between
BCD-like formats and Python longs is expensive.

> If there aren't any such implementations yet, the site that Tim
> mentioned  looks like a good starting point for heading into this
> direction... e.g. for mx.Decimal ;-)
>
> 	http://www2.hursley.ibm.com/decimal/

FYI, note that Cowlishaw is moving away from REXX's "string of ASCII digits"
representation toward a variant of BCD encoding.





From barry at digicool.com  Fri Mar 16 04:31:08 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 15 Mar 2001 22:31:08 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
References: <200103150614.BAA04221@panix6.panix.com>
	<20010315233737.B29286@xs4all.nl>
Message-ID: <15025.35068.826947.482650@anthem.wooz.org>

Three things to keep in mind, IMO.  First, people dislike too many
choices.  As the version numbering scheme and branches go up, the
confusion level rises (it's probably like for each dot or letter you
add to the version number, the number of people who understand which
one to grab goes down an order of magnitude. :).  I don't think it
makes any sense to do more than one branch from the main trunk, and
then do bug fix releases along that branch whenever and for as long as
it seems necessary.

Second, you probably do not need a beta cycle for patch releases.
Just do the 2.0.2 release and if you've royally hosed something (which
is unlikely but possible) turn around and do the 2.0.3 release <wink>
a.s.a.p.

Third, you might want to create a web page, maybe a wiki is perfect
for this, that contains the most important patches.  It needn't
contain everything that goes into a patch release, but it can if
that's not too much trouble.  A nice explanation for each fix would
allow a user who doesn't want to apply the whole patch or upgrade to
just apply the most critical bug fixes for their application.  This
can get more complicated as the dependencies b/w patches go up, so
it may not be feasible for all patches, or for the entire lifetime of
the maintenance branch.

-Barry



From barry at digicool.com  Fri Mar 16 04:40:51 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 15 Mar 2001 22:40:51 -0500
Subject: [Python-Dev] Preparing 2.0.1
References: <200103151722.f2FHMfx02202@mira.informatik.hu-berlin.de>
	<0ae001c0ad8a$8fd88e00$e000a8c0@thomasnotebook>
	<200103152132.f2FLWlE29312@mira.informatik.hu-berlin.de>
Message-ID: <15025.35651.57084.276629@anthem.wooz.org>

>>>>> "MvL" == Martin v Loewis <martin at loewis.home.cs.tu-berlin.de> writes:

    MvL> In any case, I'm in favour of a much more careful operation
    MvL> for a bugfix release. That probably means not all bugs that
    MvL> have been fixed already will be fixed in 2.0.1; I would not
    MvL> expect otherwise.

I agree.  I think each patch will require careful consideration by the
patch czar, and some will be difficult calls.  You're just not going
to "fix" everything in 2.0.1 that's fixed in 2.1.  Give it your best
shot and keep the overhead for making a new patch release low.  That
way, if you screw up or get a hue and cry for not including a patch
everyone else considers critical, you can make a new patch release
fairly soon thereafter.

-Barry



From barry at digicool.com  Fri Mar 16 04:57:40 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Thu, 15 Mar 2001 22:57:40 -0500
Subject: [Python-Dev] Re: Preparing 2.0.1
References: <no.id>
	<200103152221.RAA16060@panix3.panix.com>
	<20010315235408.D29286@xs4all.nl>
Message-ID: <15025.36660.87154.993275@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

Thanks for the explanation Thomas, that's exactly how I manage the
Mailman trees too.  A couple of notes.

    TW> I keep the Mailman 2.0.x and 2.1 (head) branches in two
    TW> different directories, the 2.0-branch one checked out with:

    TW> cvs -d twouters at cvs.mailman.sourceforge.net:/cvsroot/mailman
    TW> co -r \ Release_2_0_1-branch mailman; mv mailman mailman-2.0.x
----------------^^^^^^^^^^^^^^^^^^^^

If I had to do it over again, I would have called this the
Release_2_0-maint branch.  I think that makes more sense when you see
the Release_2_0_X tags along that branch.

This was really my first foray back into CVS branches after my last
disaster (the string-meths branch on Python).  Things are working much
better this time, so I guess I understand how to use them now...

...except that I hit a small problem with CVS.  When I was ready to
release a new patch release along the maintenance branch, I wasn't
able to coax CVS into giving me a log between two tags on the branch.
E.g. I tried:

    cvs log -rRelease_2_0_1 -rRelease_2_0_2

(I don't actually remember at the moment whether it's specified like
this or with a colon between the release tags, but that's immaterial).

The resulting log messages did not include any of the changes between
those two branches.  However a "cvs diff" between the two tags /did/
give me the proper output, as did a "cvs log" between the branch tag
and the end of the branch.

Could have been a temporary glitch in CVS or maybe I was dipping into
the happy airplane pills a little early.  I haven't tried it again
since.

took-me-about-three-hours-to-explain-this-to-jeremy-on-the-way-to-ipc9
    -but-the-happy-airplane-pills-were-definitely-partying-in-my
    -bloodstream-at-the-time-ly y'rs,

-Barry



From tim.one at home.com  Fri Mar 16 07:34:33 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 16 Mar 2001 01:34:33 -0500
Subject: [Python-Dev] Patch Manager Guidelines
In-Reply-To: <200103151644.LAA09360@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEAGJGAA.tim.one@home.com>

[Martin]
> I have to following specific questions: If a patch is accepted, should
> it be closed also? If so, how should the resolution change if it is
> also committed?

[Guido]
> A patch should only be closed after it has been committed; otherwise
> it's too easy to lose track of it.  So I guess the proper sequence is
>
> 1. accept; Resolution set to Accepted
>
> 2. commit; Status set to Closed
>
> I hope the owner of the sf-faq document can fix it.

Heh -- there is no such person.  Since I wrote that Appendix to begin with, I
checked in appropriate changes:  yes, status should be Open if and only if
something still needs to be done (even if that's only a commit); status
should be Closed or Deleted if and only if nothing more should ever be done.




From tim.one at home.com  Fri Mar 16 08:02:08 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 16 Mar 2001 02:02:08 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <200103151539.QAA01573@core.inf.ethz.ch>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com>

[Samuele Pedroni]
> ...
> I was thinking about stuff like generators used everywhere,
> but that is maybe just uninformed panicing. They are the
> kind of stuff that make programmers addictive <wink>.

Jython is to CPython as Jcon is to Icon, and *every* expression in Icon is "a
generator".

    http://www.cs.arizona.edu/icon/jcon/

is the home page, and you can get a paper from there detailing the Jcon
implementation.  It wasn't hard, and it's harder in Jcon than it would be in
Jython because Icon generators are also tied into an ubiquitous backtracking
scheme ("goal-directed evaluation").

Does Jython have an explicit object akin to CPython's execution frame?  If
so, 96.3% of what's needed for generators is already there.

At the other end of the scale, Jcon implements Icon's co-expressions (akin to
coroutines) via Java threads.




From tismer at tismer.com  Fri Mar 16 11:37:30 2001
From: tismer at tismer.com (Christian Tismer)
Date: Fri, 16 Mar 2001 11:37:30 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103152245.LAA05494@s454.cosc.canterbury.ac.nz>
Message-ID: <3AB1ECEA.CD0FFC51@tismer.com>

This is going to be a hard task.
Well, let me give it a try...

Greg Ewing wrote:
> 
> > But most probably, it will run interpreters from time to time.
> > These can be told to take the scheduling role on.
> 
> You'll have to expand on that. My understanding is that
> all the uthreads would have to run in a single C-level
> interpreter invocation which can never be allowed to
> return. I don't see how different interpreters can be
> made to "take on" this role. If that were possible,
> there wouldn't be any problem in the first place.
> 
> > It does not matter on which interpreter level we are,
> > we just can't switch to frames of other levels. But
> > even leaving a frame chain, and re-entering later
> > with a different stack level is no problem.
> 
> You'll have to expand on that, too. Those two sentences
> sound contradictory to me.

Hmm. I can't see the contradiction yet. Let me try to explain,
maybe everything becomes obvious.

A microthread is a chain of frames.
All microthreads are sitting "below" a scheduler,
which ties them all together to a common root.
So this is a little like a tree.

There is a single interpreter who does the scheduling
and the processing.
At any time, there is
- either one thread running, or
- the scheduler itself.

As long as this interpreter is running, scheduling takes place.
But against your assumption, this interpreter can of course
return. It leaves the uthread tree structure intact and jumps
out of the scheduler, back to the calling C function.
This is doable.

But then, all the frames of the uthread tree are in a defined
state, none is currently being executed, so none is locked.
We can now use any other interpreter instance that is
created and use it to restart the scheduling process.

Maybe this clarifies it:
We cannot mix different interpreter levels *at the same time*.
It is not possible to schedule from a nested interpreter,
since that one would need to be unwound first.
But stopping the interpreter is a perfect unwind, and we
can start again from anywhere.
Therefore, a call-back driven UI should be no problem.

Thanks for the good question; I had never completely
thought it through before.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From nas at arctrix.com  Fri Mar 16 12:37:33 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 03:37:33 -0800
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 16, 2001 at 02:02:08AM -0500
References: <200103151539.QAA01573@core.inf.ethz.ch> <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com>
Message-ID: <20010316033733.A9366@glacier.fnational.com>

On Fri, Mar 16, 2001 at 02:02:08AM -0500, Tim Peters wrote:
> Does Jython have an explicit object akin to CPython's execution frame?  If
> so, 96.3% of what's needed for generators is already there.

FWIW, I think I almost have generators working after making
fairly minor changes to frameobject.c and ceval.c.  The only
remaining problem is that ceval likes to nuke f_valuestack.  The
hairy WHY_* logic is making this hard to fix.  Based on all the
conditionals it looks like it would be similer to put this code
in the switch statement.  That would probably speed up the
interpreter to boot.  Am I missing something or should I give it
a try?

  Neil



From nas at arctrix.com  Fri Mar 16 12:43:46 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 03:43:46 -0800
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <20010316033733.A9366@glacier.fnational.com>; from nas@arctrix.com on Fri, Mar 16, 2001 at 03:37:33AM -0800
References: <200103151539.QAA01573@core.inf.ethz.ch> <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com> <20010316033733.A9366@glacier.fnational.com>
Message-ID: <20010316034346.B9366@glacier.fnational.com>

On Fri, Mar 16, 2001 at 03:37:33AM -0800, Neil Schemenauer wrote:
> Based on all the conditionals it looks like it would be similer
> to put this code in the switch statement.

s/similer/simpler.  Its early and I have the flu, okay? :-)

  Neil



From moshez at zadka.site.co.il  Fri Mar 16 14:18:43 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Fri, 16 Mar 2001 15:18:43 +0200
Subject: [Python-Dev] [Very Long (11K)] Numeric PEPs, first public posts
Message-ID: <E14du7v-0004Xn-00@darjeeling>

After the brouhaha at IPC9, it was decided that while PEP-0228 should stay
as a possible goal, there should be more concrete PEPs suggesting specific
changes in Python numerical model, with implementation suggestions and
migration paths fleshed out. So, there are four new PEPs now, all proposing
changes to Python's numeric model. There are some connections between them,
but each is supposed to be accepted or rejected according to its own merits.

To facilitate discussion, I'm including copies of the PEPs concerned
(for reference purposes, these are PEPs 0237-0240, and the latest public
version is always in the Python CVS under non-dist/peps/ . A reasonably
up to date version is linked from http://python.sourceforge.net)

Please direct all future discussion to python-numerics at lists.sourceforge.net
This list has been especially set-up to discuss those subjects.

PEP: 237
Title: Unifying Long Integers and Integers
Version: $Revision: 1.2 $
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    Python has both an integer (machine word size integral) type and a
    long integer (unbounded integral) type.  When integer operations
    overflow the machine registers, they raise an error.
    This PEP proposes to do away with the distinction, and unify the
    types from the perspective of both the Python interpreter and the
    C API.


Rationale

    Having the machine word size exposed to the language hinders
    portability.  For example, Python source files and .pyc's are not
    portable because of this.  Many programs find a need to deal with
    larger numbers after the fact, and changing the algorithms later
    is not only bothersome, but hinders performance in the normal
    case.


Literals

    A trailing 'L' at the end of an integer literal will stop having
    any meaning, and will eventually be phased out.  This will be done
    using warnings when encountering such literals.  The warning will
    be off by default in Python 2.2, on for 12 months, which will
    probably mean Python 2.3 and 2.4, and then the literal will no
    longer be supported.


Builtin Functions

    The function long() will call the function int(), issuing a
    warning.  The warning will be off in 2.2, and on for two revisions
    before removing the function.  A FAQ will be added to explain that
    the solutions for old modules are:

         long=int

    at the top of the module, or:

         import __builtin__
         __builtin__.long=int

    in site.py.


C API

    All PyLong_As* will call PyInt_As*.  If PyInt_As* does not exist,
    it will be added.  Similarly for PyLong_From*.  A similar path of
    warnings as for the Python builtins will be followed.


Overflows

    When an arithmetic operation on two numbers whose internal
    representation is as machine-level integers returns something
    whose internal representation is a bignum, a warning which is
    turned off by default will be issued.  This is only a debugging
    aid, and has no guaranteed semantics.
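
    The promotion rule can be sketched in Python itself; sys.maxsize
    stands in for the machine-word boundary, and the function name is
    invented for illustration:

```python
import sys
import warnings

def checked_add(a, b):
    """Sketch of the proposed unified semantics: integer arithmetic
    never raises OverflowError; crossing the machine-word boundary
    merely issues an optional (default-off) warning."""
    word_min, word_max = -sys.maxsize - 1, sys.maxsize
    result = a + b
    if (word_min <= a <= word_max and word_min <= b <= word_max
            and not word_min <= result <= word_max):
        # only a debugging aid, as the PEP says; no semantic effect
        warnings.warn("int result promoted to bignum", stacklevel=2)
    return result

checked_add(sys.maxsize, 1)   # warns (if enabled), returns the bignum
```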


Implementation

    The PyInt type's slot for a C long will be turned into a 

        union {
            long i;
            struct {
                unsigned long length;
                digit digits[1];
            } bignum;
        };

    Only the n-1 lower bits of the long have any meaning; the top bit
    is always set.  This distinguishes the union.  All PyInt functions
    will check this bit before deciding which types of operations to
    use.


Jython Issues

    Jython will have a PyInt interface which is implemented by both
    PyFixNum and PyBigNum.


Open Issues

    What to do about sys.maxint?

    What to do about PyInt_AS_LONG failures?

    What to do about the %u, %o, %x formatting operators?

    How to warn about << not cutting integers?

    Should the overflow warning be on a portable maximum size?

    Will unification of types and classes help with a more
    straightforward implementation?


Copyright

    This document has been placed in the public domain.


PEP: 238
Title: Non-integer Division
Version: $Revision: 1.1 $
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    Dividing two integers currently returns the floor of the quotient.
    This behavior is known as integer division, and is similar to what
    C and FORTRAN do.  This has the useful property that all
    operations on integers return integers, but it does tend to put a
    hump in the learning curve when new programmers are surprised that

        1/2 == 0

    This proposal shows a way to change this while keeping backward
    compatibility issues in mind.


Rationale

    The behavior of integer division is a major stumbling block found
    in user testing of Python.  This manages to trip up new
    programmers regularly and even causes the experienced programmer
    to make the occasional mistake.  The workarounds, like explicitly
    coercing one of the operands to float or using a non-integer
    literal, are very non-intuitive and lower the readability of the
    program.


// Operator

    A `//' operator will be introduced, which will call the
    nb_intdivide or __intdiv__ slots.  This operator will be
    implemented in all the Python numeric types, and will have the
    semantics of

        a // b == floor(a/b)

    except that the type of a//b will be the type that a and b are
    coerced into.  Specifically, if a and b are of the same type, a//b
    will be of that type too.


Changing the Semantics of the / Operator

    The nb_divide slot on integers (and long integers, if these are a
    separate type, but see PEP 237[1]) will issue a warning when given
    integers a and b such that

        a % b != 0

    The warning will be off by default in the 2.2 release, on by
    default in the next Python release, and will stay in effect for
    24 months.  The first Python release after those 24 months will
    implement

        (a/b) * b = a (more or less)

    The type of a/b will be either a float or a rational, depending on
    other PEPs[2, 3].


__future__

    A special opcode, FUTURE_DIV will be added that does the
    equivalent of:

        if type(a) in (types.IntType, types.LongType):
           if type(b) in (types.IntType, types.LongType):
               if a % b != 0:
                    return float(a)/b
        return a/b

    (or rational(a)/b, depending on whether 0.5 is rational or float).

    If "from __future__ import non_integer_division" is present in the
    module, until the IntType nb_divide is changed, the "/" operator
    is compiled to FUTURE_DIV.
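
    The pseudo-code above can be run as an ordinary function (a sketch
    of the float variant; the final branch emulates classic division,
    which returned an int when a % b == 0):

```python
def future_div(a, b):
    """Sketch of the FUTURE_DIV opcode's behavior (float variant)."""
    if isinstance(a, int) and isinstance(b, int):
        if a % b != 0:
            return float(a) / b   # inexact int division becomes a float
        return a // b             # exact division: classic int result
    return a / b                  # non-int operands: ordinary division

assert future_div(1, 2) == 0.5    # no longer 0
assert future_div(4, 2) == 2      # exact case keeps the int result
```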


Open Issues

    Should the // operator be renamed to "div"?


References

    [1] PEP 237, Unifying Long Integers and Integers, Zadka,
        http://python.sourceforge.net/peps/pep-0237.html

    [2] PEP 239, Adding a Rational Type to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0239.html

    [3] PEP 240, Adding a Rational Literal to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0240.html


Copyright

    This document has been placed in the public domain.


PEP: 239
Title: Adding a Rational Type to Python
Version: $Revision: 1.1 $
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    Python has no numeric type with the semantics of an unboundedly
    precise rational number.  This proposal explains the semantics of
    such a type, and suggests builtin functions and literals to
    support such a type.  This PEP suggests no literals for rational
    numbers; that is left for another PEP[1].


Rationale

    While sometimes slower and more memory intensive (in general,
    unboundedly so), rational arithmetic captures more closely the
    mathematical ideal of numbers, and tends to have behavior which is
    less surprising to newbies.  Though many Python implementations of
    rational numbers have been written, none of these exist in the
    core, or are documented in any way.  This has made them much less
    accessible to people who are less Python-savvy.


RationalType

    There will be a new numeric type added called RationalType.  Its
    unary operators will do the obvious thing.  Binary operators will
    coerce integers and long integers to rationals, and rationals to
    floats and complexes.

    The following attributes will be supported: .numerator and
    .denominator.  The language definition will not define these other
    than that:

        r.denominator * r == r.numerator

    In particular, no guarantees are made regarding the GCD or the
    sign of the denominator, even though in the proposed
    implementation, the GCD is always 1 and the denominator is always
    positive.

    The method r.trim(max_denominator) will return the closest
    rational s to r such that abs(s.denominator) <= max_denominator.


The rational() Builtin

    This function will have the signature rational(n, d=1).  n and d
    must both be integers, long integers or rationals.  A guarantee is
    made that

        rational(n, d) * d == n
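
    A minimal sketch of such a type (the class and attribute names
    follow the PEP; everything else is invented), keeping the
    invariants of the proposed implementation -- GCD 1 and a positive
    denominator:

```python
from math import gcd

class Rational:
    """Illustrative sketch of the proposed RationalType."""

    def __init__(self, n, d=1):
        if d == 0:
            raise ZeroDivisionError("rational with zero denominator")
        if d < 0:                      # normalize: denominator > 0
            n, d = -n, -d
        g = gcd(n, d)                  # normalize: gcd(n, d) == 1
        self.numerator = n // g
        self.denominator = d // g

    def __mul__(self, other):
        if isinstance(other, int):     # coerce ints to rationals
            other = Rational(other)
        return Rational(self.numerator * other.numerator,
                        self.denominator * other.denominator)

    def __eq__(self, other):
        return (self.numerator, self.denominator) == \
               (other.numerator, other.denominator)

# The guarantee rational(n, d) * d == n:
assert Rational(3, 4) * 4 == Rational(3)
```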


References

    [1] PEP 240, Adding a Rational Literal to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0240.html


Copyright

    This document has been placed in the public domain.


PEP: 240
Title: Adding a Rational Literal to Python
Version: $Revision: 1.1 $
Author: pep at zadka.site.co.il (Moshe Zadka)
Status: Draft
Type: Standards Track
Created: 11-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    A different PEP[1] suggests adding a builtin rational type to
    Python.  This PEP suggests changing the ddd.ddd float literal to a
    rational in Python, and modifying non-integer division to return
    it.


Rationale

    Rational numbers are useful, and are much harder to use without
    literals.  Making the "obvious" non-integer type one with more
    predictable semantics will surprise new programmers less than
    using floating point numbers.


Proposal

    Literals conforming to the regular expression '\d*\.\d*' will be
    rational numbers.
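
    The pattern can be checked directly (shown anchored and with the
    dot escaped); note that it also admits degenerate forms such as a
    lone '.', which the real tokenizer grammar would have to exclude:

```python
import re

# The PEP's literal pattern, anchored so it must cover the whole token.
literal = re.compile(r'^\d*\.\d*$')

assert literal.match('3.14')
assert literal.match('.5') and literal.match('2.')
assert not literal.match('10')   # plain integer literals are unaffected
assert literal.match('.')        # degenerate: a tokenizer must reject this
```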


Backwards Compatibility

    The only backwards compatible issue is the type of literals
    mentioned above.  The following migration is suggested:

    1. "from __future__ import rational_literals" will cause all such
       literals to be treated as rational numbers.

    2. Python 2.2 will have a warning, turned off by default, about
       such literals in the absence of a __future__ statement.  The
       warning message will contain information about the __future__
       statement, and indicate that to get floating point literals,
       they should be suffixed with "e0".

    3. Python 2.3 will have the warning turned on by default.  This
       warning will stay in place for 24 months, at which time the
       literals will be rationals and the warning will be removed.


References

    [1] PEP 239, Adding a Rational Type to Python, Zadka,
        http://python.sourceforge.net/peps/pep-0239.html


Copyright

    This document has been placed in the public domain.



From nas at arctrix.com  Fri Mar 16 14:54:48 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 05:54:48 -0800
Subject: [Python-Dev] Simple generator implementation
In-Reply-To: <20010316033733.A9366@glacier.fnational.com>; from nas@arctrix.com on Fri, Mar 16, 2001 at 03:37:33AM -0800
References: <200103151539.QAA01573@core.inf.ethz.ch> <LNBBLJKPBEHFEDALKOLCMEAIJGAA.tim.one@home.com> <20010316033733.A9366@glacier.fnational.com>
Message-ID: <20010316055448.A9591@glacier.fnational.com>

On Fri, Mar 16, 2001 at 03:37:33AM -0800, Neil Schemenauer wrote:
> ... it looks like it would be similer to put this code in the
> switch statement.

Um, no.  Bad idea.  Even if I could restructure the loop, try/finally
blocks mess everything up anyhow.

After searching through many megabytes of python-dev archives (grepmail
is my friend), I finally found the posts Tim was referring me to
(Subject: Generator details, Date: July 1999).  Guido and Tim already
had the answer for me.  Now:

    import sys

    def g():
        for n in range(10):
            suspend n, sys._getframe()
        return None, None

    n, frame = g()
    while frame:
        print n
        n, frame = frame.resume()

merrily prints 0 to 9 on stdout.  Whee!
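
For comparison, the same loop written with the generator syntax that
eventually landed in Python (a modern equivalent, not what the patched
interpreter above accepted):

```python
def g():
    # each yield suspends the frame and hands a value to the caller,
    # doing implicitly what the suspend/resume pair above does by hand
    for n in range(10):
        yield n

for n in g():
    print(n)          # prints 0 through 9
```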

  Neil



From aahz at panix.com  Fri Mar 16 17:51:54 2001
From: aahz at panix.com (aahz at panix.com)
Date: Fri, 16 Mar 2001 08:51:54 -0800 (PST)
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <no.id> from "Martin v. Loewis" at Mar 15, 2001 10:32:47 PM
Message-ID: <200103161651.LAA18978@panix2.panix.com>

> 2. the author of the original patch can make that decision. That would
>    mean that Fredrik Lundh can still install his code as-is, but I'd
>    have to ask Fred's permission.
> 
> 3. the bug release coordinator can make that decision. That means that
>    Aahz must decide.

I'm in favor of some combination of 2) and 3).
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From martin at loewis.home.cs.tu-berlin.de  Fri Mar 16 18:46:47 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 16 Mar 2001 18:46:47 +0100
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <200103161651.LAA18978@panix2.panix.com> (aahz@panix.com)
References: <200103161651.LAA18978@panix2.panix.com>
Message-ID: <200103161746.f2GHklZ00972@mira.informatik.hu-berlin.de>

> I'm in favor of some combination of 2) and 3).

So let's try this out: Is it ok to include the new fields on range
objects in 2.0.1?

Regards,
Martin




From mal at lemburg.com  Fri Mar 16 19:09:17 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 16 Mar 2001 19:09:17 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCEEAAJGAA.tim.one@home.com>
Message-ID: <3AB256CD.AE35DDEC@lemburg.com>

Tim Peters wrote:
> 
> [M.-A. Lemburg]
> > Just out of curiosity: is there a usable decimal type implementation
> > somewhere on the net which we could beat on ?
> 
> ftp://ftp.python.org/pub/python/
>     contrib-09-Dec-1999/DataStructures/FixedPoint.py

So my intuition wasn't wrong -- you had all this already implemented
years ago ;-)
 
> It's more than two years old, and regularly mentioned on c.l.py.  From the
> tail end of the module docstring:
> 
> """
> The following Python operators and functions accept FixedPoints in the
> expected ways:
> 
>     binary + - * / % divmod
>         with auto-coercion of other types to FixedPoint.
>         + - % divmod  of FixedPoints are always exact.
>         * / of FixedPoints may lose information to rounding, in
>             which case the result is the infinitely precise answer
>             rounded to the result's precision.
>         divmod(x, y) returns (q, r) where q is a long equal to
>             floor(x/y) as if x/y were computed to infinite precision,
>             and r is a FixedPoint equal to x - q * y; no information
>             is lost.  Note that q has the sign of y, and abs(r) < abs(y).
>     unary -
>     == != < > <= >=  cmp
>     min  max
>     float  int  long    (int and long truncate)
>     abs
>     str  repr
>     hash
>     use as dict keys
>     use as boolean (e.g. "if some_FixedPoint:" -- true iff not zero)
> """

Very impressive! The code really shows just how difficult it is
to get this done right (w/r to some definition of that term ;).

BTW, does the implementation conform to the ANSI/IEEE standards?

> > I for one would be very interested in having a decimal type
> > around (with fixed precision and scale),
> 
> FixedPoint is unbounded "to the left" of the point but maintains a fixed and
> user-settable number of (decimal) digits "after the point".  You can easily
> subclass it to complain about overflow, or whatever other damn-fool thing you
> think is needed <wink>.

I'll probably leave that part to the database interface ;-) Since they
check for possible overflows anyway, I think your model fits the
database world best.

Note that I will have to interface to database using the string
representation, so I might get away with adding scale and precision
parameters to a (new) asString() method.

> > since databases rely on these a lot and I would like to assure
> > that passing database data through Python doesn't cause any data
> > loss due to rounding issues.
> 
> Define your ideal API and maybe I can implement it someday.  My employer also
> has use for this.  FixedPoint.py is better suited to computation than I/O,
> though, since it uses Python longs internally, and conversion between
> BCD-like formats and Python longs is expensive.

See above: if string representations can be computed fast,
then the internal storage format is secondary.
 
> > If there aren't any such implementations yet, the site that Tim
> > mentioned  looks like a good starting point for heading into this
> > direction... e.g. for mx.Decimal ;-)
> >
> >       http://www2.hursley.ibm.com/decimal/
> 
> FYI, note that Cowlishaw is moving away from REXX's "string of ASCII digits"
> representation toward a variant of BCD encoding.

Hmm, the ideal would be an Open Source C lib which could be used as
a backend for the implementation... haven't found such a beast yet,
and the IBM BigDecimal Java class doesn't really look attractive as
a basis for a C++ reimplementation.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From aahz at panix.com  Fri Mar 16 19:29:29 2001
From: aahz at panix.com (aahz at panix.com)
Date: Fri, 16 Mar 2001 10:29:29 -0800 (PST)
Subject: [Python-Dev] Preparing 2.0.1
In-Reply-To: <no.id> from "Martin v. Loewis" at Mar 16, 2001 06:46:47 PM
Message-ID: <200103161829.NAA23971@panix6.panix.com>

> So let's try this out: Is it ok to include the new fields on range
> objects in 2.0.1?

My basic answer is "no".  This is complicated by the fact that the 2.22
patch on rangeobject.c *also* fixes the __contains__ bug [*].
Nevertheless, if I were the Patch Czar (and note the very, very
deliberate use of the subjunctive here), I'd probably tell whoever
wanted to fix the __contains__ bug to submit a new patch that does not
include the new xrange() attributes.


[*]  Whee!  I figured out how to browse CVS!  ;-)
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From mal at lemburg.com  Fri Mar 16 21:29:59 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 16 Mar 2001 21:29:59 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCEEAAJGAA.tim.one@home.com> <3AB256CD.AE35DDEC@lemburg.com>
Message-ID: <3AB277C7.28FE9B9B@lemburg.com>

Looking around some more on the web, I found that the GNU MP (GMP)
lib has switched from being GPLed to LGPLed, meaning that it
can actually be used by non-GPLed code as long as the source code
for the GMP remains publicly accessible.

Some background which probably motivated this move can be found 
here:

  http://www.ptf.com/ptf/products/UNIX/current/0264.0.html
  http://www-inst.eecs.berkeley.edu/~scheme/source/stk/Mp/fgmp-1.0b5/notes

Since the GMP offers arbitrary precision numbers and also has
a rational number implementation I wonder if we could use it
in Python to support fractions and arbitrary precision
floating points ?!

Here's a pointer to what the GNU MP has to offer:

  http://www.math.columbia.edu/online/gmp.html

The existing mpz module only supports MP integers, but support
for the other two types should only be a matter of hard work
;-).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From gward at python.net  Fri Mar 16 23:34:23 2001
From: gward at python.net (Greg Ward)
Date: Fri, 16 Mar 2001 17:34:23 -0500
Subject: [Python-Dev] Media spotting
Message-ID: <20010316173423.A20849@cthulhu.gerg.ca>

No doubt the Vancouver crowd has already seen this by now, but the rest
of you probably haven't.  From *The Globe and Mail*, March 15 2001, page
T5:

"""
Targeting people who work with computers but aren't programmers -- such
as data analysts, software testers, and Web masters -- ActivePerl comes
with telephone support and developer tools such as an "editor."  This
feature highlights mistakes made in a user's work -- similar to the
squiggly line that appears under spelling mistakes in Word documents.
"""

A-ha! so *that's* what editors are for!

        Greg

PS. article online at

  http://news.globetechnology.com/servlet/GAMArticleHTMLTemplate?tf=globetechnology/TGAM/NewsFullStory.html&cf=globetechnology/tech-config-neutral&slug=TWCOME&date=20010315

Apart from the above paragraph, it's pretty low on howlers.

-- 
Greg Ward - programmer-at-big                           gward at python.net
http://starship.python.net/~gward/
If you and a friend are being chased by a lion, it is not necessary to
outrun the lion.  It is only necessary to outrun your friend.



From sanner at scripps.edu  Sat Mar 17 02:43:23 2001
From: sanner at scripps.edu (Michel Sanner)
Date: Fri, 16 Mar 2001 17:43:23 -0800
Subject: [Python-Dev] import question
Message-ID: <1010316174323.ZM10134@noah.scripps.edu>

Hi, I didn't get any response on help-python.org so I figured I'd try these lists


if I have the following package hierarchy

A/
	__init__.py
        B/
		__init__.py
		C.py


I can use:

>>> from A.B import C

but if I use:

>>> import A
>>> print A
<module 'A' from 'A/__init__.pyc'>
>>> from A import B
>>> print B
<module 'A.B' from 'A/B/__init__.py'>
>>> from B import C
Traceback (innermost last):
  File "<stdin>", line 1, in ?
ImportError: No module named B

in order to get this to work I have to

>>> import sys
>>> sys.modules['B'] = B

Is that expected ?
In the documentation I read:

"from" module "import" identifier

so I expected "from B import C" to be legal since B is a module

I tried this with Python 1.5.2 and 2.0 on an sgi under IRIX6.5

Thanks for any help

-Michel

-- 

-----------------------------------------------------------------------

>>>>>>>>>> AREA CODE CHANGE <<<<<<<<< we are now 858 !!!!!!!

Michel F. Sanner Ph.D.                   The Scripps Research Institute
Assistant Professor			Department of Molecular Biology
					  10550 North Torrey Pines Road
Tel. (858) 784-2341				     La Jolla, CA 92037
Fax. (858) 784-2860
sanner at scripps.edu                        http://www.scripps.edu/sanner
-----------------------------------------------------------------------




From guido at digicool.com  Sat Mar 17 03:13:14 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 16 Mar 2001 21:13:14 -0500
Subject: [Python-Dev] Re: [Import-sig] import question
In-Reply-To: Your message of "Fri, 16 Mar 2001 17:43:23 PST."
             <1010316174323.ZM10134@noah.scripps.edu> 
References: <1010316174323.ZM10134@noah.scripps.edu> 
Message-ID: <200103170213.VAA13856@cj20424-a.reston1.va.home.com>

> if I have the following package hierarchy
> 
> A/
> 	__init__.py
>         B/
> 		__init__.py
> 		C.py
> 
> 
> I can use:
> 
> >>> from A.B import C
> 
> but if I use:
> 
> >>> import A
> >>> print A
> <module 'A' from 'A/__init__.pyc'>
> >>> from A import B
> >>> print B
> <module 'A.B' from 'A/B/__init__.py'>
> >>> from B import C
> Traceback (innermost last):
>   File "<stdin>", line 1, in ?
> ImportError: No module named B
> 
> in order to get this to work I have to
> 
> >>> import sys
> >>> sys.modules['B'] = B
> 
> Is that expected ?
> In the documentation I read:
> 
> "from" module "import" identifier
> 
> so I expected "from B import C" to be legal since B is a module
> 
> I tried this with Python 1.5.2 and 2.0 on an sgi under IRIX6.5
> 
> Thanks for any help
> 
> -Michel

In "from X import Y", X is not a reference to a name in your
namespace, it is a module name.  The right thing is indeed to write
"from A.B import C".  There's no way to shorten this; what you did
(assigning sys.modules['B'] = B) is asking for trouble.

Sorry!
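For illustration, the distinction Guido describes can be seen with a stdlib package in a short sketch (current Python syntax; os/os.path plays the role of A/A.B here, and the alias name "p" is made up for the demonstration):

```python
# "from X import Y" resolves X as a dotted *module name*, not as a name
# bound in the local namespace.  Using os/os.path as a stand-in for A/A.B:

from os import path as p     # binds the module object to the local name "p"
print(type(p))               # it's a module object...

try:
    from p import join       # ...but "p" is a local binding, not a module name
except ImportError:
    print("no top-level module named p")

from os.path import join     # the full dotted name is what import resolves
print(join("a", "b"))
```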

--Guido van Rossum (home page: http://www.python.org/~guido/)



From palisade at SirDrinkalot.rm-f.net  Sat Mar 17 03:37:54 2001
From: palisade at SirDrinkalot.rm-f.net (Palisade)
Date: Fri, 16 Mar 2001 18:37:54 -0800
Subject: [Python-Dev] PEP dircache.py core modification
Message-ID: <20010316183754.A7151@SirDrinkalot.rm-f.net>

This is my first exposure to the Python language, and I have found many things
to my liking. I have also noticed some quirks which I regard as assumption
flaws on part of the interpreter. The one I am interested in at the moment is
the assumption that we should leave the . and .. directory entries out of the
directory listing returned by os.listdir().

I have read the PEP specification and have thereby prepared a PEP for your
perusal. I hope you agree with me that this is both a philosophical issue
based in tradition as well as a duplication of effort problem that can be
readily solved with regards to backwards compatibility.

Thank you.

I have attached the PEP to this message.

Sincerely,
Nelson Rush

"This most beautiful system [The Universe] could only proceed from the
dominion of an intelligent and powerful Being."
-- Sir Isaac Newton
-------------- next part --------------
PEP: 
Title: os.listdir Full Directory Listing
Version: 
Author: palisade at users.sourceforge.net (Nelson Rush)
Status: 
Type: 
Created: 16/3/2001
Post-History: 

Introduction

    This PEP explains the need for two missing elements in the list returned
    by the os.listdir function.



Proposal

    It is obvious that having os.listdir() return a list with . and .. is
    going to cause many existing programs to function incorrectly. One
    solution to this problem could be to create a new function os.listdirall()
    or os.ldir() which returns every file and directory including the . and ..
    directory entries. Another solution could be to overload os.listdir's
    parameters, but that would unnecessarily complicate things.



Key Differences with the Existing Protocol

    The existing os.listdir() leaves out both the . and .. directory entries
    which are a part of the directory listing as is every other file.



Examples

    import os
    dir = os.ldir('/')
    for i in dir:
        print i

    The output would become:

    .
    ..
    lost+found
    tmp
    usr
    var
    WinNT
    dev
    bin
    home
    mnt
    sbin
    boot
    root
    man
    lib
    cdrom
    proc
    etc
    info
    pub
    .bash_history
    service



Dissenting Opinion

    During a discussion on Efnet #python, an objection was made to the
    usefulness of this implementation. Namely, that it is little extra
    effort to just insert these two directory entries into the list.

    Example:

    os.listdir() + ['.','..']

    An argument can be made however that the inclusion of both . and ..
    meet the standard way of listing files within directories. It is on
    basis of this common method between languages of listing directories
    that this tradition should be maintained.

    It was also suggested that not having . and .. returned in the list
    by default is required to be able to perform such actions as `cp * dest`.

    However, programs like `ls` and `cp` already list and copy files
    excluding any entry that begins with a period, since anything
    beginning with a period is considered hidden. Therefore there is no
    need to clip . and .. from the directory list by default.



Reference Implementation

    The reference implementation of the new dircache.py core ldir function
    extends listdir's functionality as proposed.

    http://palisade.rm-f.net/dircache.py



Copyright

    This document has been placed in the Public Domain.

From guido at digicool.com  Sat Mar 17 03:42:29 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 16 Mar 2001 21:42:29 -0500
Subject: [Python-Dev] PEP dircache.py core modification
In-Reply-To: Your message of "Fri, 16 Mar 2001 18:37:54 PST."
             <20010316183754.A7151@SirDrinkalot.rm-f.net> 
References: <20010316183754.A7151@SirDrinkalot.rm-f.net> 
Message-ID: <200103170242.VAA14061@cj20424-a.reston1.va.home.com>

Sorry, I see no merit in your proposal [to add "." and ".." back into
the output of os.listdir()].  You are overlooking the fact that the os
module in Python is intended to be a *portable* interface to operating
system functionality.  The presence of "." and ".." in a directory
listing is not supported on all platforms, e.g. not on Macintosh.

Also, my experience with using os.listdir() way back, when it *did*
return "." and "..", was that *every* program using os.listdir() had
to be careful to filter out "." and "..".  It simply wasn't useful to
include these.
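The filtering Guido describes is exactly the boilerplate every caller would need again if "." and ".." came back; a small sketch (the temporary directory and the file name "data.txt" are made up for the demonstration):

```python
import os
import tempfile

# a throwaway directory containing a single ordinary file
d = tempfile.mkdtemp()
open(os.path.join(d, "data.txt"), "w").close()

entries = os.listdir(d)       # today: no "." or ".." in the result
# under the old behaviour, every program needed a filter like this:
visible = [name for name in entries if name not in (".", "..")]
print(visible)                # ['data.txt']
```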

--Guido van Rossum (home page: http://www.python.org/~guido/)



From paul at prescod.net  Sat Mar 17 03:56:27 2001
From: paul at prescod.net (Paul Prescod)
Date: Fri, 16 Mar 2001 18:56:27 -0800
Subject: [Python-Dev] Sourceforge FAQ
Message-ID: <3AB2D25B.FA724414@prescod.net>

Who maintains this document?

http://python.sourceforge.net/sf-faq.html#p1

I have some suggestions.

 1. Put an email address for comments like this in it.
 2. In the section on generating diff's, put in the right options for a
context diff
 3. My SF FAQ isn't there: how do I generate a diff that has a new file
as part of it?

 Paul Prescod



From nas at arctrix.com  Sat Mar 17 03:59:22 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 16 Mar 2001 18:59:22 -0800
Subject: [Python-Dev] Simple generator implementation
Message-ID: <20010316185922.A11046@glacier.fnational.com>

Before I jump into the black hole of coroutines and
continuations, here's a patch to remember me by:

    http://arctrix.com/nas/python/generator1.diff

Bye bye.

  Neil



From tim.one at home.com  Sat Mar 17 06:40:49 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 00:40:49 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <3AB2D25B.FA724414@prescod.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>

[Paul Prescod]
> Who maintains this document?
>
> http://python.sourceforge.net/sf-faq.html#p1

Who maintains ceval.c?  Same deal:  anyone with time, commit access, and
something they want to change.

> I have some suggestions.
>
>  1. Put an email address for comments like this in it.

If you're volunteering, happy to put in *your* email address <wink>.

>  2. In the section on generating diff's, put in the right options for a
> context diff

The CVS source is

    python/nondist/sf-html/sf-faq.html

You also need to upload (scp) it to

    shell.sourceforge.net:/home/groups/python/htdocs/

after you've committed your changes.

>  3. My SF FAQ isn't there: how do I generate a diff that has a new file
> as part of it?

"diff -c" <wink -- but I couldn't make much sense of this question>.




From tim.one at home.com  Sat Mar 17 10:29:24 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 04:29:24 -0500
Subject: [Python-Dev] Re: WYSIWYG decimal fractions)
In-Reply-To: <3AB256CD.AE35DDEC@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEEHJGAA.tim.one@home.com>

[M.-A. Lemburg, on FixedPoint.py]
> ...
> Very impressive ! The code really shows just how difficult it is
> to get this done right (w/r to some definition of that term ;).

Yes and no.  Here's the "no" part:  I can do code like this in my sleep, due
to decades of experience.  So code like that isn't difficult at all for the
right person (yes, it *is* difficult if you don't already have the background
for it!  it's learnable, though <wink>).

Here's the "yes" part:  I have no experience with database or commercial
non-scientific applications, while people who do seem to have no clue about
how to *specify* what they need.  When I was writing FixedPoint.py, I asked
and asked what kind of rounding rules people needed, and what kind of
precision-propagation rules.  I got a grand total of 0 *useful* replies.  In
that sense it seems a lot like getting Python threads to work under HP-UX:
lots of people can complain, but no two HP-UX users agree on what's needed to
fix it.

In the end (for me), it *appeared* that there simply weren't any explicable
rules:  that among users of 10 different commercial apps, there were 20
different undocumented and proprietary legacy schemes for doing decimal fixed
and floats.  I'm certain I could implement any of them via trivial variations
of the FixedPoint.py code, but I couldn't get a handle on what exactly they
were.

> BTW, is the implementation ANSI/IEEE standards conform ?

Sure, the source code strictly conforms to the ANSI character set <wink>.

Which standards specifically do you have in mind?  The decimal portions of
the COBOL and REXX standards are concerned with how decimal arithmetic
interacts with language-specific features, while the 854 standard is
concerned with decimal *floating* point (which the astute reader may have
guessed FixedPoint.py does not address).  So it doesn't conform to any of
those.  Rounding, when needed, is done in conformance with the *default*
"when rounding is needed, round via nearest-or-even as if the intermediate
result were known to infinite precision" 854 rules.  But I doubt that many
commercial implementations of decimal arithmetic use that rule.

My much fancier Rational package (which I never got around to making
available) supports 9 rounding modes directly, and can be user-extended to
any number of others.  I doubt any of the builtin ones are in much use either
(for example, the builtin "round away from 0" and "round to nearest, or
towards minus infinity in case of tie" aren't even useful to me <wink>).

Today I like Cowlishaw's "Standard Decimal Arithmetic Specification" at

    http://www2.hursley.ibm.com/decimal/decspec.html

but have no idea how close that is to commercial practice (OTOH, it's
compatible w/ REXX, and lots of database-heads love REXX).

> ...
> Note that I will have to interface to database using the string
> representation, so I might get away with adding scale and precision
> parameters to a (new) asString() method.

As some of the module comments hint, FixedPoint.py started life with more
string gimmicks.  I ripped them out, though, for the same reason we *should*
drop thread support on HP-UX <0.6 wink>:  no two emails I got agreed on what
was needed, and the requests were mutually incompatible.  So I left a clean
base class for people to subclass as desired.

On 23 Dec 1999, Jim Fulton again raised "Fixed-decimal types" on Python-Dev.
I was on vacation & out of touch at the time.  Guido has surely forgotten
that he replied

    I like the idea of using the dd.ddL notation for this.

and will deny it if he reads this <wink>.

There's a long discussion after that -- look it up!  I see that I got around
to replying on 30 Dec 1999-- a repetition of this thread, really! --and
posted (Python) kernels for more flexible precision-control and rounding
policies than FixedPoint.py provided.

As is customary in the Python world, the first post that presented actual
code killed the discussion <wink/sigh> -- 'twas never mentioned again.

>> FixedPoint.py is better suited to computation than I/O, though,
>> since it uses Python longs internally, and conversion between
>> BCD-like formats and Python longs is expensive.

> See above: if string representations can be computed fast,

They cannot.  That was the point.  String representations *are* "BCD-like" to
me, in that they separate out each decimal digit.  To suck the individual
decimal digits out of a Python long requires a division by 10 for each digit.
Since people in COBOL routinely work with 32-digit decimal numbers, that's 32
*multi-precision* divisions by 10.  S-l-o-w.  You can play tricks like
dividing by 1000 instead, then use table lookup to get three digits at a
crack, but the overall process remains quadratic-time in the number of
digits.

Converting from a string of decimal digits to a Python long is also quadratic
time, so using longs as an internal representation is expensive in both
directions.

It is by far the cheapest way to do *computations*, though.  So I meant what
I said in all respects.

> ...
> Hmm, ideal would be an Open Source C lib which could be used as
> backend for the implementation... haven't found such a beast yet
> and the IBM BigDecimal Java class doesn't really look attractive as
> basis for a C++ reimplementation.

It's easy to find GPL'ed code for decimal arithmetic (for example, pick up
the Regina REXX implementation linked to from the Cowlishaw page).  For that
matter, you could just clone Python's longint code and fiddle the base to a
power of 10 (mutatis mutandis), and stick an exponent ("scale factor") on it.
This is harder than it sounds, but quite doable.

then-again-if-god-had-wanted-us-to-use-base-10-he-wouldn't-have-
    given-us-2-fingers-ly y'rs  - tim




From aahz at panix.com  Sat Mar 17 17:35:17 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sat, 17 Mar 2001 08:35:17 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010315233737.B29286@xs4all.nl> from "Thomas Wouters" at Mar 15, 2001 11:37:37 PM
Message-ID: <200103171635.LAA12321@panix2.panix.com>

>>     1. There must be zero syntax changes.  All .pyc and .pyo files
>>        must work (no regeneration needed) with all patch releases
>>        forked off from a feature release.
> 
> Hmm... Would making 'continue' work inside 'try' count as a bugfix or as a
> feature ? It's technically not a syntax change, but practically it is.
> (Invalid syntax suddenly becomes valid.) 

That's a good question.  The modifying sentence is the critical part:
would there be any change to the bytecodes generated?  Even if not, I'd
be inclined to reject it.

>>   Bug Fix Releases
>> 
>>     Bug fix releases are a subset of all patch releases; it is
>>     prohibited to add any features to the core in a bug fix release.
>>     A patch release that is not a bug fix release may contain minor
>>     feature enhancements, subject to the Prohibitions section.
> 
> I'm not for this 'bugfix release', 'patch release' difference. The
> numbering/naming convention is too confusing, not clear enough, and I don't
> see the added benefit of adding limited features. If people want features,
> they should go and get a feature release. The most important bit in patch
> ('bugfix') releases is not to add more bugs, and rewriting parts of code to
> fix a bug is something that is quite likely to insert more bugs. Sure, as
> the patch coder, you are probably certain there are no bugs -- but so was
> whoever added the bug in the first place :)

As I said earlier, the primary motivation for going this route was the
ambiguous issue of case-sensitive imports.  (Similar issues are likely
to crop up.)

>>     The Patch Czar decides when there are a sufficient number of
>>     patches to warrant a release.  The release gets packaged up,
>>     including a Windows installer, and made public as a beta release.
>>     If any new bugs are found, they must be fixed and a new beta
>>     release publicized.  Once a beta cycle completes with no new bugs
>>     found, the package is sent to PythonLabs for certification and
>>     publication on python.org.
> 
>>     Each beta cycle must last a minimum of one month.
> 
> This process probably needs a firm smack with reality, but that would have
> to wait until it meets some, first :) Deciding when to do a bugfix release
> is very tricky: some bugs warrant a quick release, but waiting to assemble
> more is generally a good idea. The whole beta cycle and windows
> installer/RPM/etc process is also a bottleneck. Will Tim do the Windows
> Installer (or whoever does it for the regular releases) ? If he's building
> the installer anyway, why can't he 'bless' the release right away ?

Remember that all bugfixes are available as patches off of SourceForge.
Anyone with a truly critical need is free to download the patch and
recompile.  Overall, I see patch releases as coinciding with feature
releases so that people can concentrate on doing the same kind of work
at the same time.

> I'm also not sure if a beta cycle in a bugfix release is really necessary,
> especially a month long one. Given that we have a feature release planned
> each 6 months, and a feature release has generally 2 alphas and 2 betas,
> plus sometimes a release candidate, plus the release itself, and a bugfix
> release would have one or two betas too, and say that we do two betas in
> those six months, that would make 10+ 'releases' of various form in those 6
> months. Ain't no-one[*] going to check them out for a decent spin, they'll
> just wait for the final version.

That's why I'm making the beta cycle artificially long (I'd even vote
for a two-month minimum).  It slows the release pace and -- given the
usually high quality of Python betas -- it encourages people to try them
out.  I believe that we *do* need patch betas for system testing.

>>     Should the first patch release following any feature release be
>>     required to be a bug fix release?  (Aahz proposes "yes".)
>>     Is it allowed to do multiple forks (e.g. is it permitted to have
>>     both 2.0.2 and 2.0.2p)?  (Aahz proposes "no".)
>>     Does it makes sense for a bug fix release to follow a patch
>>     release?  (E.g., 2.0.1, 2.0.2p, 2.0.3.)
> 
> More reasons not to have separate featurebugfixreleasethingies and
> bugfix-releases :)

Fair enough.

>>     What is the equivalent of python-dev for people who are
>>     responsible for maintaining Python?  (Aahz proposes either
>>     python-patch or python-maint, hosted at either python.org or
>>     xs4all.net.)
> 
> It would probably never be hosted at .xs4all.net. We use the .net address
> for network related stuff, and as a nice Personality Enhancer (read: IRC
> dick extender) for employees. We'd be happy to host stuff, but I would
> actually prefer to have it under a python.org or some other python-related
> domainname. That forestalls python questions going to admin at xs4all.net :) A
> small logo somewhere on the main page would be nice, but stuff like that
> should be discussed if it's ever an option, not just because you like the
> name 'XS4ALL' :-)

Okay, I didn't mean to imply that it would literally be @xs4all.net.

>>     Does SourceForge make it possible to maintain both separate and
>>     combined bug lists for multiple forks?  If not, how do we mark
>>     bugs fixed in different forks?  (Simplest is to simply generate a
>>     new bug for each fork that it gets fixed in, referring back to the
>>     main bug number for details.)
> 
> We could make it a separate SF project, just for the sake of keeping
> bugreports/fixes in the maintenance branch and the head branch apart. The
> main Python project already has an unwieldy number of open bugreports and
> patches.

That was one of my thoughts, but I'm not entitled to an opinion (I don't
have an informed opinion ;-).

> I'm also for starting the maintenance branch right after the real release,
> and start adding bugfixes to it right away, as soon as they show up. Keeping
> up to date on bugfixes to the head branch is then as 'simple' as watching
> python-checkins. (Up until the fact a whole subsystem gets rewritten, that
> is :) People should still be able to submit bugfixes for the maintenance
> branch specifically.

That is *precisely* why my original proposal suggested that only the N-1
release get patch attention, to conserve effort.  It is also why I
suggested that patch releases get hooked to feature releases.

> And I'm still willing to be the patch monkey, though I don't think I'm the
> only or the best candidate. I'll happily contribute regardless of who gets
> the blame :)

If you're willing to do the work, I'd love it if you were the official
Patch Czar.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From ping at lfw.org  Sat Mar 17 23:00:22 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Sat, 17 Mar 2001 14:00:22 -0800 (PST)
Subject: [Python-Dev] Scoping (corner cases)
Message-ID: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org>

Hey there.

What's going on here?

    Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
    [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> x = 1
    >>> class Foo:
    ...     print x
    ... 
    1
    >>> class Foo:  
    ...     print x
    ...     x = 1
    ... 
    1
    >>> class Foo:
    ...     print x
    ...     x = 2
    ...     print x
    ... 
    1
    2
    >>> x
    1

Can we come up with a consistent story on class scopes for 2.1?



-- ?!ng




From guido at digicool.com  Sat Mar 17 23:19:52 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 17 Mar 2001 17:19:52 -0500
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: Your message of "Sat, 17 Mar 2001 14:00:22 PST."
             <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org> 
References: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org> 
Message-ID: <200103172219.RAA16377@cj20424-a.reston1.va.home.com>

> What's going on here?
> 
>     Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> x = 1
>     >>> class Foo:
>     ...     print x
>     ... 
>     1
>     >>> class Foo:  
>     ...     print x
>     ...     x = 1
>     ... 
>     1
>     >>> class Foo:
>     ...     print x
>     ...     x = 2
>     ...     print x
>     ... 
>     1
>     2
>     >>> x
>     1
> 
> Can we come up with a consistent story on class scopes for 2.1?

They are consistent with all past versions of Python.

Class scopes don't work like function scopes -- they use LOAD_NAME and
STORE_NAME.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Sat Mar 17 03:16:23 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Fri, 16 Mar 2001 21:16:23 -0500 (EST)
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: <200103172219.RAA16377@cj20424-a.reston1.va.home.com>
References: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org>
	<200103172219.RAA16377@cj20424-a.reston1.va.home.com>
Message-ID: <15026.51447.862936.753570@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

  >> Can we come up with a consistent story on class scopes for 2.1?

  GvR> They are consistent with all past versions of Python.

Phew!

  GvR> Class scopes don't work like function scopes -- they use
  GvR> LOAD_NAME and STORE_NAME.

Class scopes are also different because a block's free variables are
not resolved in enclosing class scopes.  We'll need to make sure the
doc says that class scopes and function scopes are different.
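A short illustration (current Python syntax) of both points: the class body resolves names with LOAD_NAME, falling through to the module scope, while a function nested in the class body skips the class scope entirely when resolving its free variables:

```python
x = 99

class Outer:
    print(x)          # class bodies use LOAD_NAME: x isn't bound here
                      # yet, so this finds the *global* x (prints 99)
    x = 1             # ...then STORE_NAME binds x in the class namespace

    def method(self):
        return x      # free variable: the enclosing class scope is skipped

print(Outer.x)            # 1  -- the class attribute
print(Outer().method())   # 99 -- the module-level x, not the class's
```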

Jeremy




From tim.one at home.com  Sat Mar 17 23:31:08 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 17:31:08 -0500
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: <Pine.LNX.4.10.10103171358590.897-100000@skuld.kingmanhall.org>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFLJGAA.tim.one@home.com>

[Ka-Ping Yee]
> What's going on here?
>
>     Python 2.1b1 (#15, Mar 16 2001, 04:31:43)
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> x = 1
>     >>> class Foo:
>     ...     print x
>     ...
>     1
>     >>> class Foo:
>     ...     print x

IMO, this one should have yielded an UnboundLocalError at runtime.  "A class
definition is a code block", and has a local namespace that's supposed to
follow the namespace rules; since x is bound to on the next line, x should be
a local name within the class body.

>     ...     x = 1
>     ...
>     1
>     >>> class Foo:
>     ...     print x

Ditto.

>     ...     x = 2
>     ...     print x
>     ...
>     1
>     2
>     >>> x
>     1
>
> Can we come up with a consistent story on class scopes for 2.1?

The story is consistent but the implementation is flawed <wink>.  Please open
a bug report; I wouldn't consider it high priority, though, as this is
unusual stuff to do in a class definition.




From tim.one at home.com  Sat Mar 17 23:33:07 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 17:33:07 -0500
Subject: [Python-Dev] Scoping (corner cases)
In-Reply-To: <15026.51447.862936.753570@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEFMJGAA.tim.one@home.com>

[Guido]
> Class scopes don't work like function scopes -- they use
> LOAD_NAME and STORE_NAME.

[Jeremy]
> Class scopes are also different because a block's free variables are
> not resolved in enclosing class scopes.  We'll need to make sure the
> doc says that class scopes and function scopes are different.

Yup.  Since I'll never want to do stuff like this, I don't really care a heck
of a lot what it does; but it should be documented!

What does Jython do with these?
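Jeremy's point can be shown in a couple of lines (modern syntax; later Pythons kept this behaviour): a method's free variables skip the enclosing class scope entirely and resolve in the module (or enclosing function) scope instead.

```python
x = "module"

class C:
    x = "class"          # visible as C.x and self.x, but...

    def where(self):
        # ...the free variable x here is NOT resolved in the enclosing
        # class scope; it goes straight to the module scope.
        return x

print(C().where())   # module
```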




From thomas at xs4all.net  Sun Mar 18 00:01:09 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 00:01:09 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103171635.LAA12321@panix2.panix.com>; from aahz@panix.com on Sat, Mar 17, 2001 at 08:35:17AM -0800
References: <20010315233737.B29286@xs4all.nl> <200103171635.LAA12321@panix2.panix.com>
Message-ID: <20010318000109.M27808@xs4all.nl>

On Sat, Mar 17, 2001 at 08:35:17AM -0800, aahz at panix.com wrote:

> Remember that all bugfixes are available as patches off of SourceForge.

I'm sorry, Aahz, but that is just plain not true. It's not a little bit not
true, it's very not true. A lot of the patches applied are either never
submitted to SF (because it's the 'obvious fix' by one of the committers) or
are modified to some extent from the SF patch proposed. (Often
formatting/code style, fairly frequently symbol renaming, and not too
infrequently changes in the logic for various reasons.)

> > ... that would make 10+ 'releases' of various form in those 6 months.
> > Ain't no-one[*] going to check them out for a decent spin, they'll just
> > wait for the final version.

> That's why I'm making the beta cycle artificially long (I'd even vote
> for a two-month minimum).  It slows the release pace and -- given the
> usually high quality of Python betas -- it encourages people to try them
> out.  I believe that we *do* need patch betas for system testing.

But having a patch release once every 6 months negates the whole purpose of
patch releases :) If you are in need of a bugfix, you don't want to wait
three months before a bugfix release beta with your specific bug fixed is
going to be released, and you don't want to wait two months more for the
release to become final. (Note: we're talking people who don't want to use
the next feature release beta or current CVS version, so they aren't likely
to try a bugfix release beta either.) Bugfix releases should come often-ish,
compared to feature releases. But maybe we can get the BDFL to slow the pace
of feature releases instead ? Is the 6-month speedway really appropriate if
we have a separate bugfix release track ?

> > I'm also for starting the maintenance branch right after the real release,
> > and start adding bugfixes to it right away, as soon as they show up. Keeping
> > up to date on bugfixes to the head branch is then as 'simple' as watching
> > python-checkins. (Up until the fact a whole subsystem gets rewritten, that
> > is :) People should still be able to submit bugfixes for the maintenance
> > branch specifically.

> That is *precisely* why my original proposal suggested that only the N-1
> release get patch attention, to conserve effort.  It is also why I
> suggested that patch releases get hooked to feature releases.

There is no technical reason to do just N-1. You can branch off as often as
you want (in fact, branches never disappear, so if we were building 3.5 ten
years from now (and we would still be using CVS <wink GregS>) we could apply
a specific patch to the 2.0 maintenance branch and release 2.0.128, if need
be.)

Keeping too many maintenance branches active does bring the administrative
nightmare with it, of course. We can start with just N-1 and see where it
goes from there. If significant numbers of people are still using 2.0.5 when
2.2 comes out, we might have to reconsider.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Sun Mar 18 00:26:45 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 00:26:45 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>; from tim.one@home.com on Sat, Mar 17, 2001 at 12:40:49AM -0500
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
Message-ID: <20010318002645.H29286@xs4all.nl>

On Sat, Mar 17, 2001 at 12:40:49AM -0500, Tim Peters wrote:

> >  3. My SF FAQ isn't there: how do I generate a diff that has a new file
> > as part of it?

> "diff -c" <wink -- but I couldn't make much sense of this question>.

What Paul means is that he's added a new file to his tree, and wants to send
in a patch that includes that file. Unfortunately, CVS can't do that :P You
have two choices:

- 'cvs add' the file, but don't commit. This is kinda lame since it requires
 commit access, and it creates the administrativia for the file already. I
 *think* that if you do this, only you can actually add the file (after the
 patch is accepted ;) but I'm not sure. After the cvs add, a cvs diff -c will
 show the file (as all +'es, obviously) even though it will complain to
 stderr about its ignorance about that specific file.

- Don't use cvs diff. Use real diff instead. Something like this:

  mv your tree aside, (can just mv your 'src' dir to 'src.mypatch' or such)
  cvs update -d,
  make distclean in your old tree,
  diff -crN --exclude=CVS src src.mypatch > mypatch.diff

 Scan your diff for bogus files, delete the sections by hand or if there are
 too many of them, add more --exclude options to your diff. I usually use
 '--exclude=".#*"' as well, and I forget what else.  By the way, for those
 who don't know it yet, an easy way to scan the patch is using 'diffstat'.

Note that to *apply* a patch like that (one with a new file), you need a
reasonably up-to-date GNU 'patch'.

I haven't added all this to the SF FAQ because, uhm, well, I consider them
lame hacks. I've long suspected there was a better way to do this, but I
haven't found it or even heard rumours about it yet. We should probably add
it to the FAQ anyway (just the 2nd option, though.)

Of course, there is a third way: write your own diff >;> It's not that hard,
really :) 

diff -crN ....
*** <name of file>      Thu Jan  1 01:00:00 1970
--- <name of file>      <timestamp of file>
***************
*** 0 ****
--- 1,<number of lines in file> ----
<file, each line prefixed by '+ '>

You can just insert this chunk (with an Index: line and some fake RCS cruft,
if you want -- patch doesn't use it anyway, IIRC) somewhere in your patch
file.
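For what it's worth, the hand-rolled chunk above is exactly what the difflib module (added to the stdlib after this thread) produces if you treat a brand-new file as a context diff against an empty old version; a sketch:

```python
import difflib

def new_file_chunk(path, lines):
    # A new file is just a context diff against an empty "old" file,
    # with the epoch as the old timestamp -- the same convention as
    # the hand-written chunk above.
    return ''.join(difflib.context_diff(
        [], lines,
        fromfile=path, tofile=path,
        fromfiledate='Thu Jan  1 01:00:00 1970'))

print(new_file_chunk('hello.txt', ['hello\n', 'world\n']))
```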

A couple of weeks back, while on a 10-hour nighttime spree to fix all our
SSH clients and daemons to openssh 2.5 where possible and a handpatched ssh1
where necessary, I found myself unconsciously writing diffs instead of
editing source and re-diffing the files, because I apparently thought it was
faster (it was, too.) Scarily enough, I got all the line numbers and such
correct, and patch didn't whine about them at all ;)

Sign-o-the-nerdy-times-I-guess-ly y'rs ;)
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim.one at home.com  Sun Mar 18 00:49:22 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 18:49:22 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <20010318002645.H29286@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEFNJGAA.tim.one@home.com>

[Paul]
>>>  3. My SF FAQ isn't there: how do I generate a diff that has a new file
>>>     as part of it?

[Tim]
>> "diff -c" <wink -- but I couldn't make much sense of this question>.

[Thomas]
> What Paul means is that he's added a new file to his tree, and
> wants to send in a patch that includes that file.

Ya, I picked that up after Martin explained it.  Best I could make out was
that Paul had written his own SF FAQ document and wanted to know how to
generate a diff that incorporated it as "a new file" into the existing SF
FAQ.  But then I've been severely sleep-deprived most of the last week
<0.zzzz wink>.

> ...
> - Don't use cvs diff. Use real diff instead. Something like this:
>
>   mv your tree aside, (can just mv your 'src' dir to
>                         'src.mypatch' or such)
>   cvs update -d,
>   make distclean in your old tree,
>   diff -crN --exclude=CVS src src.mypatch > mypatch.diff
>
> Scan your diff for bogus files, delete the sections by hand or if
> there are too many of them, add more --exclude options to your diff. I
> usually use '--exclude=".#*"' as well, and I forget what else.  By the
> way, for those who don't know it yet, an easy way to scan the patch is
> using 'diffstat'.
>
> Note that to *apply* a patch like that (one with a new file), you need a
> reasonably up-to-date GNU 'patch'.
> ...

I'm always amused that Unix users never allow the limitations of their tools
to convince them to do something obvious instead.

on-windows-you-just-tell-tim-to-change-the-installer<wink>-ly y'rs  - tim




From thomas at xs4all.net  Sun Mar 18 00:58:40 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 00:58:40 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEFNJGAA.tim.one@home.com>; from tim.one@home.com on Sat, Mar 17, 2001 at 06:49:22PM -0500
References: <20010318002645.H29286@xs4all.nl> <LNBBLJKPBEHFEDALKOLCIEFNJGAA.tim.one@home.com>
Message-ID: <20010318005840.K29286@xs4all.nl>

On Sat, Mar 17, 2001 at 06:49:22PM -0500, Tim Peters wrote:

> I'm always amused that Unix users never allow the limitations of their tools
> to convince them to do something obvious instead.

What would be the obvious thing ? Just check it in ? :-)
Note that CVS's dinkytoy attitude did prompt several people to do the
obvious thing: they started to rewrite it from scratch. Greg Stein jumped in
with those people to help them out on the tough infrastructure decisions,
which is why one of my *other* posts that mentioned CVS did a <wink GregS>
;)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From tim.one at home.com  Sun Mar 18 01:17:06 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 19:17:06 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <20010318005840.K29286@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFOJGAA.tim.one@home.com>

[Thomas Wouters]
> What would be the obvious thing ? Just check it in ? :-)

No:  as my signoff line implied, switch to Windows and tell Tim to deal with
it.  Works for everyone except me <wink>!  I was just tweaking you.  For a
patch on SF, it should be enough to just attach the new files and leave a
comment saying where they belong.

> Note that CVS's dinkytoy attitude did prompt several people to do the
> obvious thing: they started to rewrite it from scratch. Greg Stein
> jumped in with those people to help them out on the touch infrastructure
> decisions, which is why one of my *other* posts that mentioned CVS did a
> <wink GregS>
> ;)

Yup, *that* I picked up.

BTW, I'm always amused that Unix users never allow the lateness of their
rewrite-from-scratch boondoggles to convince them to do something obvious
instead.

wondering-how-many-times-someone-will-bite-ly y'rs  - tim




From pedroni at inf.ethz.ch  Sun Mar 18 01:27:48 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Sun, 18 Mar 2001 01:27:48 +0100
Subject: [Python-Dev] Scoping (corner cases)
References: <LNBBLJKPBEHFEDALKOLCAEFMJGAA.tim.one@home.com>
Message-ID: <3AB40104.8020109@inf.ethz.ch>

Hi.

Tim Peters wrote:

> [Guido]
> 
>> Class scopes don't work like function scopes -- they use
>> LOAD_NAME and STORE_NAME.
> 
> 
> [Jeremy]
> 
>> Class scopes are also different because a block's free variables are
>> not resolved in enclosing class scopes.  We'll need to make sure the
>> doc says that class scopes and function scopes are different.
> 
> 
> Yup.  Since I'll never want to do stuff like this, I don't really care a heck
> of a lot what it does; but it should be documented!
> 
> What does Jython do with these?

The Jython codebase (both before and after my nested scopes changes) does
exactly the same as CPython; in fact, something equivalent to LOAD_NAME
and STORE_NAME is used in class scopes.

regards




From pedroni at inf.ethz.ch  Sun Mar 18 02:17:47 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Sun, 18 Mar 2001 02:17:47 +0100
Subject: [Python-Dev] Icon-style generators vs. jython
References: <LNBBLJKPBEHFEDALKOLCAEFLJGAA.tim.one@home.com>
Message-ID: <3AB40CBB.2050308@inf.ethz.ch>


This is very preliminary; no time to read the details, try things, or look
at Neil's implementation.

As far as I have understood, Icon generators are functions with normal
entry and exit points plus multiple suspension points:
at a suspension point, an eventual implementation should save the current
frame state somewhere inside the function object, together with the
information about where the function should restart, and then return a
value (or nothing) normally.

In Jython we have frames, and functions are encapsulated in objects, so
the whole thing should be doable (with some effort); I expect we can deal
with the multiple entry points using a JVM switch bytecode. Entry code or
function dispatch code should handle restarting (we already have
code that manages frame creation and function dispatch on every Python
call).

There could be a problem with jythonc (the Jython-to-Java compiler),
because it produces Java source code rather than bytecode directly:
at the source level, AFAIK, Java does not let you intermingle switches
with other control structures (and there is no goto ;)), so it is unclear
how to express multiple entry points. We would have to rewrite it to
produce bytecode directly.

What is the expected behaviour wrt threads: should generators be reentrant
(meaning that frame and restart info would be saved on a per-thread basis),
or are they somehow global active objects, so that if thread 1 calls a
generator that suspends, thread 2 will reenter it after the
suspension point?

Freezing more than one frame is not directly possible in Jython: frames
are pushed and popped on the Java stack, and function calls pass through
the Java calling mechanism. (I imagine you would need a separate thread
for that.)

regards.




From tim.one at home.com  Sun Mar 18 02:36:40 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 17 Mar 2001 20:36:40 -0500
Subject: FW: FW: [Python-Dev] Simple generator implementation
Message-ID: <LNBBLJKPBEHFEDALKOLCGEGCJGAA.tim.one@home.com>

FYI, I pointed a correspondent to Neil's new generator patch (among other
things), and got this back.  Not being a Web Guy at heart, I don't have a
clue about XSLT (just enough to know that 4-letter acronyms are a web
abomination <wink>).

Note:  in earlier correspondence, the generator idea didn't seem to "click"
until I called them "resumable functions" (as I often did in the past, but
fell out of the habit).  People new to the concept often pick that up
quicker, or even, as in this case, remember that they once rolled such a
thing by hand out of prior necessity.

Anyway, possibly food for thought if XSLT means something to you ...


-----Original Message-----
From: XXX
Sent: Saturday, March 17, 2001 8:09 PM
To: Tim Peters
Subject: Re: FW: [Python-Dev] Simple generator implementation


On Sat, 17 Mar 2001, Tim Peters wrote:
> It's been done at least three times by now, most recently yesterday(!):

Thanks for the pointer.  I've started to read some
of the material you pointed me to... generators
are indeed very interesting.  They are what is
needed for an efficient implementation of XSLT.
(I was part of an XSLT implementation team that had to
dream up essentially the same solution). This is
all very cool.  Glad to see that I'm just re-inventing
the wheel.  Let's get generators in Python!

;) XXX




From paulp at ActiveState.com  Sun Mar 18 02:50:39 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sat, 17 Mar 2001 17:50:39 -0800
Subject: FW: FW: [Python-Dev] Simple generator implementation
References: <LNBBLJKPBEHFEDALKOLCGEGCJGAA.tim.one@home.com>
Message-ID: <3AB4146E.62AE3299@ActiveState.com>

I would call what you need for an efficient XSLT implementation "lazy
lists." They are never infinite but you would rather not pre-compute
them in advance. Often you use only the first item. Iterators probably
would be a good implementation technique.
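A hypothetical sketch of what "lazy list" could mean here (my own illustration, not anything from an actual XSLT implementation): items are computed by a producer function only on first access and cached, exposed through the old `__getitem__` protocol the interpreter used for iteration at the time:

```python
class LazyList:
    """Compute items on demand via func(i); never precompute the list."""

    def __init__(self, func):
        self.func = func
        self.cache = []

    def __getitem__(self, i):
        # Fill the cache only as far as the requested index.
        while len(self.cache) <= i:
            self.cache.append(self.func(len(self.cache)))
        return self.cache[i]

squares = LazyList(lambda n: n * n)
print(squares[3])          # computes items 0..3 only
print(len(squares.cache))  # 4
```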
-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From nas at arctrix.com  Sun Mar 18 03:17:41 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Sat, 17 Mar 2001 18:17:41 -0800
Subject: [Python-Dev] Simple generators, round 2
Message-ID: <20010317181741.B12195@glacier.fnational.com>

I've got a different implementation.  There are no new keywords
and it's simpler to wrap a high-level interface around the low-level
interface.

    http://arctrix.com/nas/python/generator2.diff

What the patch does:

    Split the big for loop and switch statement out of eval_code2
    into PyEval_EvalFrame.

    Add a new "why" flag for ceval, WHY_SUSPEND.  It is similar to
    WHY_RETURN except that the frame value stack and the block stack
    are not touched.  The frame is also marked resumable before
    returning (f_stackbottom != NULL).

    Add two new methods to frame objects, suspend and resume.
    suspend takes one argument which gets attached to the frame
    (f_suspendvalue).  This tells ceval to suspend as soon as control
    gets back to this frame.  resume, strangely enough, resumes a
    suspended frame.  Execution continues at the point it was
    suspended.  This is done by calling PyEval_EvalFrame on the frame
    object.

    Make frame_dealloc clean up the stack and decref f_suspendvalue
    if it exists.

There are probably still bugs and it slows down ceval too much
but otherwise things are looking good.  Here are some examples
(they're a little long but illustrative).  Low level
interface, similar to my last example:

    # print 0 to 999
    import sys

    def g():
        for n in range(1000):
            f = sys._getframe()
            f.suspend((n, f))
        return None, None

    n, frame = g()
    while frame:
        print n
        n, frame = frame.resume()

Let's build something easier to use:

    # Generator.py
    import sys

    class Generator:
        def __init__(self):
            self.frame = sys._getframe(1)
            self.frame.suspend(self)
            
        def suspend(self, value):
            self.frame.suspend(value)

        def end(self):
            raise IndexError

        def __getitem__(self, i):
            # fake indices suck, need iterators
            return self.frame.resume()

Now let's try Guido's pi example now:

    # Prints out the first 100 digits of pi
    from Generator import Generator

    def pi():
        g = Generator()
        k, a, b, a1, b1 = 2L, 4L, 1L, 12L, 4L
        while 1:
            # Next approximation
            p, q, k = k*k, 2L*k+1L, k+1L
            a, b, a1, b1 = a1, b1, p*a+q*a1, p*b+q*b1
            # Print common digits
            d, d1 = a/b, a1/b1
            while d == d1:
                g.suspend(int(d))
                a, a1 = 10L*(a%b), 10L*(a1%b1)
                d, d1 = a/b, a1/b1

    def test():
        pi_digits = pi()
        for i in range(100):
            print pi_digits[i],

    if __name__ == "__main__":
        test()

Some tree traversals:

    from types import TupleType
    from Generator import Generator

    # (A - B) + C * (E/F)
    expr = ("+", 
             ("-", "A", "B"),
             ("*", "C",
                  ("/", "E", "F")))
               
    def postorder(node):
        g = Generator()
        if isinstance(node, TupleType):
            value, left, right = node
            for child in postorder(left):
                g.suspend(child)
            for child in postorder(right):
                g.suspend(child)
            g.suspend(value)
        else:
            g.suspend(node)
        g.end()

    print "postorder:",
    for node in postorder(expr):
        print node,
    print

This prints:

    postorder: A B - C E F / * +

Cheers,

  Neil



From aahz at panix.com  Sun Mar 18 07:31:39 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sat, 17 Mar 2001 22:31:39 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <20010318000109.M27808@xs4all.nl> from "Thomas Wouters" at Mar 18, 2001 12:01:09 AM
Message-ID: <200103180631.BAA03321@panix3.panix.com>

>> Remember that all bugfixes are available as patches off of SourceForge.
> 
> I'm sorry, Aahz, but that is just plain not true. It's not a little bit not
> true, it's very not true. A lot of the patches applied are either never
> submitted to SF (because it's the 'obvious fix' by one of the committers) or
> are modified to some extent from the SF patch proposed. (Often
> formatting/code style, fairly frequently symbol renaming, and not too
> infrequently changes in the logic for various reasons.)

I'm thinking one of us is confused.  CVS is hosted at SourceForge,
right?  People can download specific parts of Python from SF?  And we're
presuming there will be a specific fork that patches are checked in to?
So in what way is my statement not true?

>>> ... that would make 10+ 'releases' of various form in those 6 months.
>>> Ain't no-one[*] going to check them out for a decent spin, they'll just
>>> wait for the final version.
>> 
>> That's why I'm making the beta cycle artificially long (I'd even vote
>> for a two-month minimum).  It slows the release pace and -- given the
>> usually high quality of Python betas -- it encourages people to try them
>> out.  I believe that we *do* need patch betas for system testing.
> 
> But having a patch release once every 6 months negates the whole
> purpose of patch releases :) If you are in need of a bugfix, you
> don't want to wait three months before a bugfix release beta with
> your specific bug fixed is going to be released, and you don't want
> to wait two months more for the release to become final. (Note: we're
> talking people who don't want to use the next feature release beta or
> current CVS version, so they aren't likely to try a bugfix release
> beta either.) Bugfix releases should come often-ish, compared to
> feature releases. But maybe we can get the BDFL to slow the pace of
> feature releases instead ? Is the 6-month speedway really appropriate
> if we have a separate bugfix release track ?

Well, given that neither of us is arguing on the basis of actual
experience with Python patch releases, there's no way we can prove one
point of view as being better than the other.  Tell you what, though:
take the job of Patch Czar, and I'll follow your lead.  I'll just
reserve the right to say "I told you so".  ;-)

>>> I'm also for starting the maintenance branch right after the real release,
>>> and start adding bugfixes to it right away, as soon as they show up. Keeping
>>> up to date on bugfixes to the head branch is then as 'simple' as watching
>>> python-checkins. (Up until the fact a whole subsystem gets rewritten, that
>>> is :) People should still be able to submit bugfixes for the maintenance
>>> branch specifically.
> 
>> That is *precisely* why my original proposal suggested that only the N-1
>> release get patch attention, to conserve effort.  It is also why I
>> suggested that patch releases get hooked to feature releases.
> 
> There is no technical reason to do just N-1. You can branch off as often as
> you want (in fact, branches never disappear, so if we were building 3.5 ten
> years from now (and we would still be using CVS <wink GregS>) we could apply
> a specific patch to the 2.0 maintenance branch and release 2.0.128, if need
> be.)

No technical reason, no.  It's just that N-1 is going to be similar
enough to N, particularly for any given bugfix, that it should be
"trivial" to keep the bugfixes in synch.  That's all.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From esr at snark.thyrsus.com  Sun Mar 18 07:46:28 2001
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Sun, 18 Mar 2001 01:46:28 -0500
Subject: [Python-Dev] Followup on freezetools error
Message-ID: <200103180646.f2I6kSV16765@snark.thyrsus.com>

OK, so following Guido's advice I did a CVS update and reinstall and
then tried a freeze on the CML2 compiler.  Result:

Traceback (most recent call last):
  File "freezetools/freeze.py", line 460, in ?
    main()
  File "freezetools/freeze.py", line 321, in main
    mf.import_hook(mod)
  File "freezetools/modulefinder.py", line 106, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "freezetools/modulefinder.py", line 147, in find_head_package
    q = self.import_module(head, qname, parent)
  File "freezetools/modulefinder.py", line 232, in import_module
    m = self.load_module(fqname, fp, pathname, stuff)
  File "freezetools/modulefinder.py", line 260, in load_module
    self.scan_code(co, m)
  File "freezetools/modulefinder.py", line 280, in scan_code
    self.import_hook(name, m)
  File "freezetools/modulefinder.py", line 106, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "freezetools/modulefinder.py", line 147, in find_head_package
    q = self.import_module(head, qname, parent)
  File "freezetools/modulefinder.py", line 232, in import_module
    m = self.load_module(fqname, fp, pathname, stuff)
  File "freezetools/modulefinder.py", line 260, in load_module
    self.scan_code(co, m)
  File "freezetools/modulefinder.py", line 280, in scan_code
    self.import_hook(name, m)
  File "freezetools/modulefinder.py", line 106, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "freezetools/modulefinder.py", line 147, in find_head_package
    q = self.import_module(head, qname, parent)
  File "freezetools/modulefinder.py", line 232, in import_module
    m = self.load_module(fqname, fp, pathname, stuff)
  File "freezetools/modulefinder.py", line 260, in load_module
    self.scan_code(co, m)
  File "freezetools/modulefinder.py", line 302, in scan_code
    self.scan_code(c, m)
  File "freezetools/modulefinder.py", line 280, in scan_code
    self.import_hook(name, m)
  File "freezetools/modulefinder.py", line 106, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "freezetools/modulefinder.py", line 147, in find_head_package
    q = self.import_module(head, qname, parent)
  File "freezetools/modulefinder.py", line 232, in import_module
    m = self.load_module(fqname, fp, pathname, stuff)
  File "freezetools/modulefinder.py", line 260, in load_module
    self.scan_code(co, m)
  File "freezetools/modulefinder.py", line 280, in scan_code
    self.import_hook(name, m)
  File "freezetools/modulefinder.py", line 106, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "freezetools/modulefinder.py", line 147, in find_head_package
    q = self.import_module(head, qname, parent)
  File "freezetools/modulefinder.py", line 232, in import_module
    m = self.load_module(fqname, fp, pathname, stuff)
  File "freezetools/modulefinder.py", line 260, in load_module
    self.scan_code(co, m)
  File "freezetools/modulefinder.py", line 280, in scan_code
    self.import_hook(name, m)
  File "freezetools/modulefinder.py", line 106, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "freezetools/modulefinder.py", line 147, in find_head_package
    q = self.import_module(head, qname, parent)
  File "freezetools/modulefinder.py", line 232, in import_module
    m = self.load_module(fqname, fp, pathname, stuff)
  File "freezetools/modulefinder.py", line 260, in load_module
    self.scan_code(co, m)
  File "freezetools/modulefinder.py", line 280, in scan_code
    self.import_hook(name, m)
  File "freezetools/modulefinder.py", line 106, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "freezetools/modulefinder.py", line 147, in find_head_package
    q = self.import_module(head, qname, parent)
  File "freezetools/modulefinder.py", line 232, in import_module
    m = self.load_module(fqname, fp, pathname, stuff)
  File "freezetools/modulefinder.py", line 260, in load_module
    self.scan_code(co, m)
  File "freezetools/modulefinder.py", line 280, in scan_code
    self.import_hook(name, m)
  File "freezetools/modulefinder.py", line 106, in import_hook
    q, tail = self.find_head_package(parent, name)
  File "freezetools/modulefinder.py", line 147, in find_head_package
    q = self.import_module(head, qname, parent)
  File "freezetools/modulefinder.py", line 232, in import_module
    m = self.load_module(fqname, fp, pathname, stuff)
  File "freezetools/modulefinder.py", line 260, in load_module
    self.scan_code(co, m)
  File "freezetools/modulefinder.py", line 288, in scan_code
    assert lastname is not None
AssertionError
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

Question with boldness even the existence of a God; because, if there
be one, he must more approve the homage of reason, than that of
blindfolded fear.... Do not be frightened from this inquiry from any
fear of its consequences. If it ends in the belief that there is no
God, you will find incitements to virtue in the comfort and
pleasantness you feel in its exercise...
	-- Thomas Jefferson, in a 1787 letter to his nephew



From esr at snark.thyrsus.com  Sun Mar 18 08:06:08 2001
From: esr at snark.thyrsus.com (Eric S. Raymond)
Date: Sun, 18 Mar 2001 02:06:08 -0500
Subject: [Python-Dev] Re: Followup on freezetools error
Message-ID: <200103180706.f2I768q17436@snark.thyrsus.com>

Cancel previous complaint.  Pilot error.  I think I'm going to end up
writing some documentation for this puppy...
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

You know why there's a Second Amendment?  In case the government fails to
follow the first one.
         -- Rush Limbaugh, in a moment of unaccustomed profundity 17 Aug 1993



From pedroni at inf.ethz.ch  Sun Mar 18 13:01:40 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Sun, 18 Mar 2001 13:01:40 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com>
Message-ID: <001901c0afa3$322094e0$f979fea9@newmexico>

This kind of low-level implementation, where suspension points are known
only at runtime, cannot be implemented in Jython
(at least not in a reasonable, non-costly way).
The Jython codebase will likely only allow generators whose suspension
points are known at compilation time.
regards.
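For comparison, the design Python eventually adopted marks every suspension point syntactically with a keyword, which is exactly what a source-to-source compiler like jythonc needs (hindsight sketch; the yield keyword did not exist when this was written):

```python
def countdown(n):
    # Each yield is a suspension point, visible to the compiler at
    # compile time rather than discovered at runtime.
    while n > 0:
        yield n
        n -= 1

print(list(countdown(3)))   # [3, 2, 1]
```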

----- Original Message -----
From: Neil Schemenauer <nas at arctrix.com>
To: <python-dev at python.org>
Sent: Sunday, March 18, 2001 3:17 AM
Subject: [Python-Dev] Simple generators, round 2


> I've got a different implementation.  There are no new keywords
> and it's simpler to wrap a high-level interface around the low-level
> interface.
>
>     http://arctrix.com/nas/python/generator2.diff
>
> What the patch does:
>
>     Split the big for loop and switch statement out of eval_code2
>     into PyEval_EvalFrame.
>
>     Add a new "why" flag for ceval, WHY_SUSPEND.  It is similar to
>     WHY_RETURN except that the frame value stack and the block stack
>     are not touched.  The frame is also marked resumable before
>     returning (f_stackbottom != NULL).
>
>     Add two new methods to frame objects, suspend and resume.
>     suspend takes one argument which gets attached to the frame
>     (f_suspendvalue).  This tells ceval to suspend as soon as control
>     gets back to this frame.  resume, strangely enough, resumes a
>     suspended frame.  Execution continues at the point it was
>     suspended.  This is done by calling PyEval_EvalFrame on the frame
>     object.
>
>     Make frame_dealloc clean up the stack and decref f_suspendvalue
>     if it exists.
>
> There are probably still bugs and it slows down ceval too much
> but otherwise things are looking good.  Here are some examples
> (they're a little long but illustrative).  Low-level
> interface, similar to my last example:
>
>     # print 0 to 999
>     import sys
>
>     def g():
>         for n in range(1000):
>             f = sys._getframe()
>             f.suspend((n, f))
>         return None, None
>
>     n, frame = g()
>     while frame:
>         print n
>         n, frame = frame.resume()
>
> Let's build something easier to use:
>
>     # Generator.py
>     import sys
>
>     class Generator:
>         def __init__(self):
>             self.frame = sys._getframe(1)
>             self.frame.suspend(self)
>
>         def suspend(self, value):
>             self.frame.suspend(value)
>
>         def end(self):
>             raise IndexError
>
>         def __getitem__(self, i):
>             # fake indices suck, need iterators
>             return self.frame.resume()
>
> Now let's try Guido's pi example:
>
>     # Prints out the first 100 digits of pi
>     from Generator import Generator
>
>     def pi():
>         g = Generator()
>         k, a, b, a1, b1 = 2L, 4L, 1L, 12L, 4L
>         while 1:
>             # Next approximation
>             p, q, k = k*k, 2L*k+1L, k+1L
>             a, b, a1, b1 = a1, b1, p*a+q*a1, p*b+q*b1
>             # Print common digits
>             d, d1 = a/b, a1/b1
>             while d == d1:
>                 g.suspend(int(d))
>                 a, a1 = 10L*(a%b), 10L*(a1%b1)
>                 d, d1 = a/b, a1/b1
>
>     def test():
>         pi_digits = pi()
>         for i in range(100):
>             print pi_digits[i],
>
>     if __name__ == "__main__":
>         test()
>
> Some tree traversals:
>
>     from types import TupleType
>     from Generator import Generator
>
>     # (A - B) + C * (E/F)
>     expr = ("+",
>              ("-", "A", "B"),
>              ("*", "C",
>                   ("/", "E", "F")))
>
>     def postorder(node):
>         g = Generator()
>         if isinstance(node, TupleType):
>             value, left, right = node
>             for child in postorder(left):
>                 g.suspend(child)
>             for child in postorder(right):
>                 g.suspend(child)
>             g.suspend(value)
>         else:
>             g.suspend(node)
>         g.end()
>
>     print "postorder:",
>     for node in postorder(expr):
>         print node,
>     print
>
> This prints:
>
>     postorder: A B - C E F / * +
>
> Cheers,
>
>   Neil
>
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev
>





From fdrake at acm.org  Sun Mar 18 15:23:23 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Sun, 18 Mar 2001 09:23:23 -0500 (EST)
Subject: [Python-Dev] Re: Followup on freezetools error
In-Reply-To: <200103180706.f2I768q17436@snark.thyrsus.com>
References: <200103180706.f2I768q17436@snark.thyrsus.com>
Message-ID: <15028.50395.414064.239096@cj42289-a.reston1.va.home.com>

Eric S. Raymond writes:
 > Cancel previous complaint.  Pilot error.  I think I'm going to end up
 > writing some documentation for this puppy...

Eric,
  So how often would you like reminders?  ;-)
  I think a "howto" format document would be great; I'm sure we could
find a place for it in the standard documentation.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From guido at digicool.com  Sun Mar 18 16:01:50 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 10:01:50 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: Your message of "Sun, 18 Mar 2001 00:26:45 +0100."
             <20010318002645.H29286@xs4all.nl> 
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>  
            <20010318002645.H29286@xs4all.nl> 
Message-ID: <200103181501.KAA22545@cj20424-a.reston1.va.home.com>

> What Paul means is that he's added a new file to his tree, and wants to send
> in a patch that includes that file. Unfortunately, CVS can't do that :P You
> have two choices:
> 
> - 'cvs add' the file, but don't commit. This is kinda lame since it requires
>  commit access, and it creates the administrativia for the file already. I
>  *think* that if you do this, only you can actually add the file (after the
>  patch is accepted ;) but I'm not sure. After the cvs add, a cvs diff -c will
>  show the file (as all +'es, obviously) even though it will complain to
>  stderr about its ignorance about that specific file.

No, cvs diff still won't diff the file -- it says "new file".

> - Don't use cvs diff. Use real diff instead. Something like this:

Too much work to create a new tree.

What I do: I usually *know* which files are new.  (If you don't,
consider getting a little more organized first :-).  Then do a regular
diff -c between /dev/null and each of the new files, and append that
to the CVS-generated diff.  Patch understands a diff between /dev/null
and a regular file, and takes it to mean that the file should be added.
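For example, the whole dance might look like this (the filename here is
invented purely for illustration):

```shell
# Stand-in for a newly added file (the name is made up for this sketch).
echo "print 'hello'" > newmodule.py

# Normally you'd start from the CVS-generated diff of the modified
# files; here we just begin with an empty patch file.
: > mypatch.diff

# Append a context diff between /dev/null and the new file.  patch(1)
# reads a diff from /dev/null as "create this file".  diff exits with
# status 1 when the files differ, so don't treat that as a failure.
diff -c /dev/null newmodule.py >> mypatch.diff || true
```

Applying mypatch.diff in a fresh tree then creates newmodule.py along
with the other changes.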

(I have no idea what the rest of this thread is about.  Dinkytoy
attitude???  I played with toy cars called dinky toys, but I don't see
the connection.  What SF FAQ are we talking about anyway?)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From barry at digicool.com  Sun Mar 18 17:22:38 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sun, 18 Mar 2001 11:22:38 -0500
Subject: [Python-Dev] Sourceforge FAQ
References: <3AB2D25B.FA724414@prescod.net>
	<LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
	<20010318002645.H29286@xs4all.nl>
Message-ID: <15028.57550.447075.226874@anthem.wooz.org>

>>>>> "TP" == Tim Peters <tim.one at home.com> writes:

    TP> I'm always amused that Unix users never allow the limitations
    TP> of their tools to convince them to do something obvious
    TP> instead.

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> - Don't use cvs diff. Use real diff instead. Something like
    TW> this:

    TW>   mv your tree aside, (can just mv your 'src' dir to
    TW> 'src.mypatch' or such) cvs update -d, make distclean in your
    TW> old tree, diff -crN --exclude=CVS src src.mypatch >
    TW> mypatch.diff

Why not try the "obvious" thing <wink>?

    % cvs diff -uN <rev-switches>

(Okay this also generates unified diffs, but I'm starting to find them
more readable than context diffs anyway.)

I seem to recall actually getting this to work effortlessly when I
generated the Mailman 2.0.3 patch (which contained the new file
README.secure_linux).

Yup, looking at the uploaded SF patch

    http://ftp1.sourceforge.net/mailman/mailman-2.0.2-2.0.3.diff

that file's in there, diffed against /dev/null, so the whole file shows
up as `+' lines.

-Barry



From thomas at xs4all.net  Sun Mar 18 17:49:25 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 17:49:25 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <200103181501.KAA22545@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sun, Mar 18, 2001 at 10:01:50AM -0500
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com> <20010318002645.H29286@xs4all.nl> <200103181501.KAA22545@cj20424-a.reston1.va.home.com>
Message-ID: <20010318174924.N27808@xs4all.nl>

On Sun, Mar 18, 2001 at 10:01:50AM -0500, Guido van Rossum wrote:
> > What Paul means is that he's added a new file to his tree, and wants to send
> > in a patch that includes that file. Unfortunately, CVS can't do that :P You
> > have two choices:
> > 
> > - 'cvs add' the file, but don't commit. This is kinda lame since it requires
> >  commit access, and it creates the administrativia for the file already. I
> >  *think* that if you do this, only you can actually add the file (after the
> >  patch is accepted ;) but I'm not sure. After the cvs add, a cvs diff -c will
> >  show the file (as all +'es, obviously) even though it will complain to
> >  stderr about its ignorance about that specific file.

> No, cvs diff still won't diff the file -- it says "new file".

Hm, you're right. I'm sure I had it working, but it doesn't work now. Odd. I
guess Barry got hit by the same oddity (see other reply to my msg ;)

> (I have no idea what the rest of this thread is about.  Dinkytoy
> > attitude???  I played with toy cars called dinky toys, but I don't see
> the connection.  What SF FAQ are we talking about anyway?)

The thread started by Paul asking why his question wasn't in the FAQ :) As
for 'dinkytoy attitude': it's a great, wonderful toy, but you can't use it
for real. A bit harsh, I guess, but I've been hitting the CVS constraints
many times in the last two weeks. (Moving files, moving directories,
removing directories 'for real', moving between different repositories in
which some files/directories (or just their names) overlap, making diffs
with new files in them ;) etc.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Sun Mar 18 17:53:25 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 11:53:25 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Sat, 17 Mar 2001 22:31:39 PST."
             <200103180631.BAA03321@panix3.panix.com> 
References: <200103180631.BAA03321@panix3.panix.com> 
Message-ID: <200103181653.LAA22789@cj20424-a.reston1.va.home.com>

> >> Remember that all bugfixes are available as patches off of SourceForge.
> > 
> > I'm sorry, Aahz, but that is just plain not true. It's not a little bit not
> > true, it's very not true. A lot of the patches applied are either never
> > submitted to SF (because it's the 'obvious fix' by one of the committers) or
> > are modified to some extent from the SF patch proposed. (Often
> > formatting/code style, fairly frequently symbol renaming, and not too
> > infrequently changes in the logic for various reasons.)
> 
> I'm thinking one of us is confused.  CVS is hosted at SourceForge,
> right?  People can download specific parts of Python from SF?  And we're
> presuming there will be a specific fork that patches are checked in to?
> So in what way is my statement not true?

Ah...  Thomas clearly thought you meant the patch manager, and you
didn't make it too clear that's not what you meant.  Yes, they are of
course all available as diffs -- and notice how I use this fact in the
2.0 patches lists in the 2.0 wiki, e.g. on
http://www.python.org/cgi-bin/moinmoin/CriticalPatches.

> >>> ... that would make 10+ 'releases' of various form in those 6 months.
> >>> Ain't no-one[*] going to check them out for a decent spin, they'll just
> >>> wait for the final version.
> >> 
> >> That's why I'm making the beta cycle artificially long (I'd even vote
> >> for a two-month minimum).  It slows the release pace and -- given the
> >> usually high quality of Python betas -- it encourages people to try them
> >> out.  I believe that we *do* need patch betas for system testing.
> > 
> > But having a patch release once every 6 months negates the whole
> > purpose of patch releases :) If you are in need of a bugfix, you
> > don't want to wait three months before a bugfix release beta with
> > your specific bug fixed is going to be released, and you don't want
> > to wait two months more for the release to become final. (Note: we're
> > talking people who don't want to use the next feature release beta or
> > current CVS version, so they aren't likely to try a bugfix release
> > beta either.) Bugfix releases should come often-ish, compared to
> > feature releases. But maybe we can get the BDFL to slow the pace of
> > feature releases instead ? Is the 6-month speedway really appropriate
> > if we have a separate bugfix release track ?
> 
> Well, given that neither of us is arguing on the basis of actual
> experience with Python patch releases, there's no way we can prove one
> point of view as being better than the other.  Tell you what, though:
> take the job of Patch Czar, and I'll follow your lead.  I'll just
> reserve the right to say "I told you so".  ;-)

It seems I need to butt in here.  :-)

I like the model used by Tcl.  They have releases with a 6-12 month
release cycle, 8.0, 8.1, 8.2, 8.3, 8.4.  These have serious alpha and
beta cycles (three of each typically).  Once a release is out, they
issue occasional patch releases, e.g. 8.2.1, 8.2.2, 8.2.3; these are
about a month apart.  The latter bugfixes overlap with the early alpha
releases of the next major release.  I see no sign of beta cycles for
the patch releases.  The patch releases are *very* conservative in
what they add -- just bugfixes, about 5-15 per bugfix release.  They
seem to add the bugfixes to the patch branch as soon as they get them,
and they issue patch releases as soon as they can.

I like this model a lot.  Aahz, if you want to, you can consider this
a BDFL proclamation -- can you add this to your PEP?

> >>> I'm also for starting the maintenance branch right after the
> >>> real release, and start adding bugfixes to it right away, as
> >>> soon as they show up. Keeping up to date on bufixes to the head
> >>> branch is then as 'simple' as watching python-checkins. (Up
> >>> until the fact a whole subsystem gets rewritten, that is :)
> >>> People should still be able to submit bugfixes for the
> >>> maintenance branch specifically.
> > 
> >> That is *precisely* why my original proposal suggested that only
> >> the N-1 release get patch attention, to conserve effort.  It is
> >> also why I suggested that patch releases get hooked to feature
> >> releases.
> > 
> > There is no technical reason to do just N-1. You can branch off as
> > often as you want (in fact, branches never disappear, so if we
> > were building 3.5 ten years from now (and we would still be using
> > CVS <wink GregS>) we could apply a specific patch to the 2.0
> > maintenance branch and release 2.0.128, if need be.)
> 
> No technical reason, no.  It's just that N-1 is going to be similar
> enough to N, particularly for any given bugfix, that it should be
> "trivial" to keep the bugfixes in synch.  That's all.

I agree.  The Tcl folks never issue patch releases when they've issued
a new major release (in fact the patch releases seem to stop long
before they're ready to issue the next major release).  I realize that
we're way behind with 2.0.1 -- since this is the first time we're
doing this, that's OK for now, but in the future I like the Tcl
approach a lot!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From thomas at xs4all.net  Sun Mar 18 18:03:10 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 18:03:10 +0100
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <15028.57550.447075.226874@anthem.wooz.org>; from barry@digicool.com on Sun, Mar 18, 2001 at 11:22:38AM -0500
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com> <20010318002645.H29286@xs4all.nl> <15028.57550.447075.226874@anthem.wooz.org>
Message-ID: <20010318180309.P27808@xs4all.nl>

On Sun, Mar 18, 2001 at 11:22:38AM -0500, Barry A. Warsaw wrote:

> Why not try the "obvious" thing <wink>?

>     % cvs diff -uN <rev-switches>

That certainly doesn't work. 'cvs' just gives a '? Filename' line for that
file, then. I just figured out why the 'cvs add <file>; cvs diff -cN' trick
worked before: it works with CVS 1.11 (which is what's in Debian unstable),
but not with CVS 1.10.8 (which is what's in RH7.) But you really have to use
'cvs add' before doing the diff. (So I'll take back *some* of the dinkytoy
comment ;)

> I seem to recall actually getting this to work effortlessly when I
> generated the Mailman 2.0.3 patch (which contained the new file
> README.secure_linux).

Ah, but you had already added and committed that file. Paul wants to do it to
submit a patch to SF, so checking it in to do that is probably not what he
meant. ;-P

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Sun Mar 18 18:07:18 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Sun, 18 Mar 2001 18:07:18 +0100
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <200103181653.LAA22789@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Sun, Mar 18, 2001 at 11:53:25AM -0500
References: <200103180631.BAA03321@panix3.panix.com> <200103181653.LAA22789@cj20424-a.reston1.va.home.com>
Message-ID: <20010318180718.Q27808@xs4all.nl>

On Sun, Mar 18, 2001 at 11:53:25AM -0500, Guido van Rossum wrote:

> I like the Tcl approach a lot!

Me, too. I didn't know they did it like that, but it makes sense to me :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From barry at digicool.com  Sun Mar 18 18:18:31 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sun, 18 Mar 2001 12:18:31 -0500
Subject: [Python-Dev] Sourceforge FAQ
References: <3AB2D25B.FA724414@prescod.net>
	<LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
	<20010318002645.H29286@xs4all.nl>
	<200103181501.KAA22545@cj20424-a.reston1.va.home.com>
	<20010318174924.N27808@xs4all.nl>
Message-ID: <15028.60903.326987.679071@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    TW> The thread started by Paul asking why his question wasn't in
    TW> the FAQ :) As for 'dinkytoy attitude': it's a great, wonderful
    TW> toy, but you can't use it for real. A bit harsh, I guess, but
    TW> I've been hitting the CVS constraints many times in the last
    TW> two weeks. (Moving files, moving directories, removing
    TW> directories 'for real', moving between different repositories
    TW> in which some files/directories (or just their names) overlap,
    TW> making diffs with new files in them ;) etc.)

Was it Greg Wilson who said at IPC8 that CVS was the worst tool that
everybody uses (or something like that)?

-Barry



From guido at digicool.com  Sun Mar 18 18:21:03 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 12:21:03 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: Your message of "Sun, 18 Mar 2001 17:49:25 +0100."
             <20010318174924.N27808@xs4all.nl> 
References: <3AB2D25B.FA724414@prescod.net> <LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com> <20010318002645.H29286@xs4all.nl> <200103181501.KAA22545@cj20424-a.reston1.va.home.com>  
            <20010318174924.N27808@xs4all.nl> 
Message-ID: <200103181721.MAA23196@cj20424-a.reston1.va.home.com>

> > No, cvs diff still won't diff the file -- it says "new file".
> 
> Hm, you're right. I'm sure I had it working, but it doesn't work now. Odd. I
> guess Barry got hit by the same oddity (see other reply to my msg ;)

Barry posted the right solution: cvs diff -c -N.  The -N option treats
absent files as empty.  I'll use this in the future!

> > (I have no idea what the rest of this thread is about.  Dinkytoy
> > attitude???  I played with toy cars called dinky toys, but I don't see
> > the connection.  What SF FAQ are we talking about anyway?)
> 
> The thread started by Paul asking why his question wasn't in the FAQ :) As
> for 'dinkytoy attitude': it's a great, wonderful toy, but you can't use it
> for real. A bit harsh, I guess, but I've been hitting the CVS constraints
> many times in the last two weeks. (Moving files, moving directories,
> removing directories 'for real', moving between different repositories in
> which some files/directories (or just their names) overlap, making diffs
> with new files in them ;) etc.)

Note that at least *some* of the constraints have to do with issues
inherent in version control.  And cvs diff -N works. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Sun Mar 18 18:23:35 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 12:23:35 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Sun, 18 Mar 2001 18:07:18 +0100."
             <20010318180718.Q27808@xs4all.nl> 
References: <200103180631.BAA03321@panix3.panix.com> <200103181653.LAA22789@cj20424-a.reston1.va.home.com>  
            <20010318180718.Q27808@xs4all.nl> 
Message-ID: <200103181723.MAA23240@cj20424-a.reston1.va.home.com>

[me]
> > I like the Tcl approach a lot!

[Thomas]
> Me, too. I didn't know they did it like that, but it makes sense to me :)

Ok, you are hereby nominated to be the 2.0.1 patch Czar.

(You saw that coming, right? :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From barry at digicool.com  Sun Mar 18 18:28:44 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sun, 18 Mar 2001 12:28:44 -0500
Subject: [Python-Dev] Sourceforge FAQ
References: <3AB2D25B.FA724414@prescod.net>
	<LNBBLJKPBEHFEDALKOLCCEEDJGAA.tim.one@home.com>
	<20010318002645.H29286@xs4all.nl>
	<15028.57550.447075.226874@anthem.wooz.org>
	<20010318180309.P27808@xs4all.nl>
Message-ID: <15028.61516.717449.55864@anthem.wooz.org>

>>>>> "TW" == Thomas Wouters <thomas at xs4all.net> writes:

    >> I seem to recall actually getting this to work effortlessly
    >> when I generated the Mailman 2.0.3 patch (which contained the
    >> new file README.secure_linux).

    TW> Ah, but you had already added and commited that file. Paul
    TW> wants to do it to submit a patch to SF, so checking it in to
    TW> do that is probably not what he meant. ;-P

Ah, you're right.  I'd missed Paul's original message.  Who am I to
argue that CVS doesn't suck? :)

-Barry



From paulp at ActiveState.com  Sun Mar 18 19:01:43 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 18 Mar 2001 10:01:43 -0800
Subject: [Python-Dev] Sourceforge FAQ
References: <LNBBLJKPBEHFEDALKOLCMEFOJGAA.tim.one@home.com>
Message-ID: <3AB4F807.4EAAD9FF@ActiveState.com>

Tim Peters wrote:
> 

> No:  as my signoff line implied, switch to Windows and tell Tim to deal with
> it.  Works for everyone except me <wink>!  I was just tweaking you.  For a
> patch on SF, it should be enough to just attach the new files and leave a
> comment saying where they belong.

Well, I'm going to bite just one more time. As near as I could see, a
patch on SF allows the submission of only a single file. What I did to
get around this (it seemed obvious at the time) was to put the contents
of the file (because it was small) in the comment field and attach the
"rest of the patch."

Then I wanted to update the file, but comments are appended rather than
replaced, so the changes were quickly going to become nasty.

I'm just glad that the answer was sufficiently subtle that it generated
a new thread. I didn't miss anything obvious. :)
-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From martin at loewis.home.cs.tu-berlin.de  Sun Mar 18 19:39:48 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 18 Mar 2001 19:39:48 +0100
Subject: [Python-Dev] Sourceforge FAQ
Message-ID: <200103181839.f2IIdm101115@mira.informatik.hu-berlin.de>

> As near as I could see, a patch on SF allows the submission of a single
> file.

That was true with the old patch manager; the new tool can have
multiple artefacts per report. So I guess the proper procedure now is
to attach new files separately (or to build an archive of the new
files and to attach that separately). That requires no funny diffs
against /dev/null and works on VMS, ummh, Windows also.
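For instance (with file names invented for illustration), building such
an archive is just:

```shell
# Two stand-in new files to be attached to the tracker item
# (the names are made up for this example).
echo "first new file" > newfile1.py
echo "second new file" > newfile2.py

# Bundle them into one archive; newfiles.tar.gz is then attached
# to the report as a single artefact.
tar czf newfiles.tar.gz newfile1.py newfile2.py
```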

Regards,
Martin



From aahz at panix.com  Sun Mar 18 20:42:30 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sun, 18 Mar 2001 11:42:30 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <no.id> from "Guido van Rossum" at Mar 18, 2001 11:53:25 AM
Message-ID: <200103181942.OAA08158@panix3.panix.com>

Guido:
>Aahz:
>>
>>    [to Thomas Wouters]
>>
>> I'm thinking one of us is confused.  CVS is hosted at SourceForge,
>> right?  People can download specific parts of Python from SF?  And we're
>> presuming there will be a specific fork that patches are checked in to?
>> So in what way is my statement not true?
> 
> Ah...  Thomas clearly thought you meant the patch manager, and you
> didn't make it too clear that's not what you meant.  Yes, they are of
> course all available as diffs -- and notice how I use this fact in the
> 2.0 patches lists in the 2.0 wiki, e.g. on
> http://www.python.org/cgi-bin/moinmoin/CriticalPatches.

Of course I didn't make it clear, because I have no clue what I'm
talking about.  ;-)  And actually, I was talking about simply
downloading complete replacements for specific Python source files.

But that seems to be irrelevant to our current path, so I'll shut up now.

>> Well, given that neither of us is arguing on the basis of actual
>> experience with Python patch releases, there's no way we can prove one
>> point of view as being better than the other.  Tell you what, though:
>> take the job of Patch Czar, and I'll follow your lead.  I'll just
>> reserve the right to say "I told you so".  ;-)
> 
> It seems I need to butt in here.  :-)
> 
> I like the model used by Tcl.  They have releases with a 6-12 month
> release cycle, 8.0, 8.1, 8.2, 8.3, 8.4.  These have serious alpha and
> beta cycles (three of each typically).  Once a release is out, they
> issue occasional patch releases, e.g. 8.2.1, 8.2.2, 8.2.3; these are
> about a month apart.  The latter bugfixes overlap with the early alpha
> releases of the next major release.  I see no sign of beta cycles for
> the patch releases.  The patch releases are *very* conservative in
> what they add -- just bugfixes, about 5-15 per bugfix release.  They
> seem to add the bugfixes to the patch branch as soon as they get them,
> and they issue patch releases as soon as they can.
> 
> I like this model a lot.  Aahz, if you want to, you can consider this
> a BDFL proclamation -- can you add this to your PEP?

BDFL proclamation received.  It'll take me a little while to rewrite
this into an internally consistent PEP.  It would be helpful if you
pre-announced (to c.l.py.announce) the official change in feature release
policy (the 6-12 month target instead of a 6 month target).

>>Thomas Wouters:
> >> There is no technical reason to do just N-1. You can branch off as
>>> often as you want (in fact, branches never disappear, so if we
>>> were building 3.5 ten years from now (and we would still be using
>>> CVS <wink GregS>) we could apply a specific patch to the 2.0
>>> maintenance branch and release 2.0.128, if need be.)
>> 
>> No technical reason, no.  It's just that N-1 is going to be similar
>> enough to N, particularly for any given bugfix, that it should be
>> "trivial" to keep the bugfixes in synch.  That's all.
> 
> I agree.  The Tcl folks never issue patch releases when they've issued
> a new major release (in fact the patch releases seem to stop long
> before they're ready to issue the next major release).  I realize that
> we're way behind with 2.0.1 -- since this is the first time we're
> doing this, that's OK for now, but in the future I like the Tcl
> approach a lot!

Okie-doke.
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From tim_one at email.msn.com  Sun Mar 18 20:49:17 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 18 Mar 2001 14:49:17 -0500
Subject: [Python-Dev] Sourceforge FAQ
In-Reply-To: <3AB4F807.4EAAD9FF@ActiveState.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHOJGAA.tim_one@email.msn.com>

[Paul Prescod]
> Well, I'm going to bite just one more time. As near as I could see, a
> patch on SF allows the submission of a single file.

That *used* to be true.  Tons of stuff changed on SF recently, including the
ability to attach as many files to patches as you need.  Also to bug reports,
which previously didn't allow any file attachments.  These are all instances
of a Tracker now.  "Feature Requests" is a new Tracker.




From guido at digicool.com  Sun Mar 18 20:58:19 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 18 Mar 2001 14:58:19 -0500
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: Your message of "Sun, 18 Mar 2001 11:42:30 PST."
             <200103181942.OAA08158@panix3.panix.com> 
References: <200103181942.OAA08158@panix3.panix.com> 
Message-ID: <200103181958.OAA23418@cj20424-a.reston1.va.home.com>

> > I like this model a lot.  Aahz, if you want to, you can consider this
> > a BDFL proclamation -- can you add this to your PEP?
> 
> BDFL proclamation received.  It'll take me a little while to rewrite
> this into an internally consistent PEP.  It would be helpful if you
> pre-announced (to c.l.py.announce) the official change in feature release
> policy (the 6-12 month target instead of a 6 month target).

You're reading too much in it. :-)

I don't want to commit to a precise release interval anyway -- no two
releases are the same.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From aahz at panix.com  Sun Mar 18 21:12:57 2001
From: aahz at panix.com (aahz at panix.com)
Date: Sun, 18 Mar 2001 12:12:57 -0800 (PST)
Subject: [Python-Dev] PEP 6: Patch and Bug Fix Releases
In-Reply-To: <no.id> from "Guido van Rossum" at Mar 18, 2001 02:58:19 PM
Message-ID: <200103182012.PAA04074@panix2.panix.com>

>> BDFL proclamation received.  It'll take me a little while to rewrite
>> this into an internally consistent PEP.  It would be helpful if you
>> pre-announced (to c.l.py.announce) the official change in feature release
>> policy (the 6-12 month target instead of a 6 month target).
> 
> You're reading too much in it. :-)

Mmmmm....  Probably.

> I don't want to commit to a precise release interval anyway -- no two
> releases are the same.

That's very good to hear.  Perhaps I'm alone in this perception, but it
has sounded to me as though there's a goal (if not a "precise" interval)
of a release every six months.  Here's a quote from you on c.l.py:

"Given our current pace of releases that should be about 6 months warning."

With your current posting frequency to c.l.py, such oracular statements
have some of the force of a Proclamation.  ;-)
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

Three sins: BJ, B&J, B&J



From paulp at ActiveState.com  Sun Mar 18 22:12:45 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 18 Mar 2001 13:12:45 -0800
Subject: [Python-Dev] Sourceforge FAQ
References: <200103181839.f2IIdm101115@mira.informatik.hu-berlin.de>
Message-ID: <3AB524CD.67A0DEEA@ActiveState.com>

"Martin v. Loewis" wrote:
> 
> > As near as I could see, a patch on SF allows the submission of a single
> > file.
> 
> That was true with the old patch manager; the new tool can have
> multiple artefacts per report. 

The user interface really does not indicate that multiple files may be
attached. Do I just keep going back into the patch page, adding files?

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From guido at python.org  Sun Mar 18 23:43:27 2001
From: guido at python.org (Guido van Rossum)
Date: Sun, 18 Mar 2001 17:43:27 -0500
Subject: [Python-Dev] Re: What has become of PEP224 ?
Message-ID: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>

[On c.l.py]
"Aahz Maruch" <aahz at panix.com> wrote in message
news:992tb4$qf5$1 at panix2.panix.com...
> [cc'd to Barry Warsaw in case he wants to comment]

(I happen to be skimming c.l.py this lazy Sunday afternoon :-)

> In article <3ab4f320 at nntp.server.uni-frankfurt.de>,
> Michael 'Mickey' Lauer  <mickey at Vanille.de> wrote:
> >
> >Hi. If I remember correctly PEP224 (the famous "attribute docstrings")
> >has only been postponed because Python 2.0 was in feature freeze
> >in August 2000. Will it be in 2.1 ? If not, what's the reason ? What
> >is needed for it to be included in 2.1 ?
>
> I believe it has been essentially superseded by PEP 232; I thought
> function attributes were going to be in 2.1, but I don't see any clear
> indication.

Actually, the attribute docstrings PEP is about a syntax for giving
non-function objects a docstring.  That's quite different than the function
attributes PEP.

The attribute docstring PEP didn't get in (and is unlikely to get in in its
current form) because I don't like the syntax much, *and* because the way to
look up the docstrings is weird and ugly: you'd have to use something like
instance.spam__doc__ or instance.__doc__spam (I forget which; they're both
weird and ugly).
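For concreteness, PEP 224's proposal attaches a bare string literal to the preceding assignment; a bare string in that position is a legal no-op today, so the sketch below runs as-is. The class is a made-up example, and the generated lookup name in the comment follows the PEP's `__doc_<attrname>__` scheme only approximately:

```python
class Connection:
    """Class docstring -- already supported today."""
    timeout = 30
    "Seconds to wait before giving up."
    # Under PEP 224 the string above would be stored as something like
    # Connection.__doc_timeout__ -- the "weird and ugly" lookup name at issue.

# Today the bare string literal is simply discarded:
assert Connection.timeout == 30
assert not hasattr(Connection, '__doc_timeout__')
```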

I also expect that the doc-sig will be using the same syntax (string
literals in non-docstring positions) for a different purpose.  So I see
little chance for PEP 224.  Maybe I should just pronounce on this, and
declare the PEP rejected.

Unless Ping thinks this would be a really cool feature to be added to pydoc?
(Ping's going to change pydoc from importing the target module to scanning
its source, I believe -- then he could add this feature without changing the
Python parser. :-)

--Guido van Rossum







From tim_one at email.msn.com  Sun Mar 18 23:48:38 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Sun, 18 Mar 2001 17:48:38 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: <3AB277C7.28FE9B9B@lemburg.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEIEJGAA.tim_one@email.msn.com>

[M.-A. Lemburg]
> Looking around some more on the web, I found that the GNU MP (GMP)
> lib has switched from being GPLed to LGPLed,

Right.

> meaning that it can actually be used by non-GPLed code as long as
> the source code for the GMP remains publically accessible.

Ask Stallman <0.9 wink>.

> ...
> Since the GMP offers arbitrary precision numbers and also has
> a rational number implementation I wonder if we could use it
> in Python to support fractions and arbitrary precision
> floating points ?!

Note that Alex Martelli runs the "General Multiprecision Python" project on
SourceForge:

    http://gmpy.sourceforge.net/

He had a severe need for fast rational arithmetic in his Python programs, so
started wrapping the full GMP out of necessity.  I'm sorry to say that I
haven't had time to even download his code.

WRT floating point, GMP supports arbitrary-precision floats too, but not in a
way you're going to like:  they're binary floats, and do not deliver
platform-independent results.  That last point is subtle, because the docs
say:

    The precision of a calculation is defined as follows:  Compute the
    requested operation exactly (with "infinite precision"), and truncate
    the result to the destination variable precision.

Leaving aside that truncation is a bad idea, that *sounds*
platform-independent.  The trap is that GMP supports no way to specify the
precision of a float result exactly:  you can ask for any precision you like,
but the implementation reserves the right to *use* any precision whatsoever
that's at least as large as what you asked for.  And, in practice, they do
use more than you asked for, depending on the word size of the machine.  This
is in line with GMP's overriding goal of being fast, rather than consistent
or elegant.

GMP's int and rational facilities could be used to build platform-independent
decimal fp, though.  However, this doesn't get away from the string<->float
issues I covered before:  if you're going to use binary ints internally (and
GMP does), decimal_string<->decimal_float is quadratic time in both
directions.
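The quadratic cost is easy to see in a sketch: converting an n-digit decimal string to a binary integer by the usual Horner scheme does n multiply-by-10 steps on an integer that grows toward n digits, so the total work is O(n**2). Illustrative only:

```python
def decimal_string_to_int(s):
    # Horner's scheme: each step multiplies an ever-longer binary
    # integer by 10, so n digits cost O(n**2) bit operations in total.
    v = 0
    for ch in s:
        v = v * 10 + (ord(ch) - ord('0'))
    return v

assert decimal_string_to_int('4294967296') == 2 ** 32
```

The reverse direction (binary integer to decimal string) does repeated division by 10 and is quadratic for the same reason.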

Note too that GMP is a lot of code, and difficult to port due to its "speed
first" goals.  Making Python *rely* on it is thus dubious (GMP on a Palm
Pilot?  maybe ...).

> Here's pointer to what the GNU MP has to offer:
>
>   http://www.math.columbia.edu/online/gmp.html

The official home page (according to Torbjörn Granlund, GMP's dad) is

    http://www.swox.com/gmp/

> The existing mpz module only supports MP integers, but support
> for the other two types should only be a matter of hard work ;-).

Which Alex already did.  Now what <wink>?




From aleaxit at yahoo.com  Mon Mar 19 00:26:23 2001
From: aleaxit at yahoo.com (Alex Martelli)
Date: Mon, 19 Mar 2001 00:26:23 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCAEIEJGAA.tim_one@email.msn.com>
Message-ID: <08e801c0b003$824f4f00$0300a8c0@arthur>

"Tim Peters" <tim_one at email.msn.com> writes:

> Note that Alex Martelli runs the "General Multiprecision Python" project on
> SourceForge:
>
>     http://gmpy.sourceforge.net/
>
> He had a severe need for fast rational arithmetic in his Python programs, so
> started wrapping the full GMP out of necessity.  I'm sorry to say that I
> haven't had time to even download his code.

...and as for me, I haven't gotten around to prettying it up for beta
release yet (mostly the docs -- still just a plain textfile) as it's doing
what I need... but, I _will_ get a round tuit...
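For readers without GMP handy, the exact rational arithmetic gmpy wraps can be sketched in pure Python. The `fractions` module arrived only later (Python 2.6) and is far slower than GMP's mpq type, but the semantics match:

```python
from fractions import Fraction

# Exact rational arithmetic: no binary rounding anywhere.
third = Fraction(1, 3)
assert third + third + third == 1           # exactly 1, unlike binary floats
assert Fraction('3.14') == Fraction(157, 50)  # decimal strings convert exactly
```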


> WRT floating point, GMP supports arbitrary-precision floats too, but not in a
> way you're going to like:  they're binary floats, and do not deliver
> platform-independent results.  That last point is subtle, because the docs
> say:
>
>     The precision of a calculation is defined as follows:  Compute the
>     requested operation exactly (with "infinite precision"), and truncate
>     the result to the destination variable precision.
>
> Leaving aside that truncation is a bad idea, that *sounds*
> platform-independent.  The trap is that GMP supports no way to specify the
> precision of a float result exactly:  you can ask for any precision you like,

There's another free library that interoperates with GMP to remedy
this -- it's called MPFR and lives at http://www.loria.fr/projets/mpfr/.
It's also LGPL.  I haven't looked much into it as it seems it's not been
ported to Windows yet (and that looks like quite a project) which is
the platform I'm currently using (and, rationals do what I need:-).

> > The existing mpz module only supports MP integers, but support
> > for the other two types should only be a matter of hard work ;-).
>
> Which Alex already did.  Now what <wink>?

Hmmm, port MPFR everywhere...?-)  Pearu Peterson already did the
MPFR Python wrapper interoperating with GMPY, btw -- it lives at
http://cens.ioc.ee/~pearu/misc/gmpy-mpfr/ (haven't tested it as
I can't run MPFR myself, as above explained).


Alex



_________________________________________________________
Do You Yahoo!?
Get your free @yahoo.com address at http://mail.yahoo.com




From mal at lemburg.com  Mon Mar 19 01:07:17 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 01:07:17 +0100
Subject: [Python-Dev] Re: What has become of PEP224 (attribute docstrings) ?
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>
Message-ID: <3AB54DB5.52254EB6@lemburg.com>

Guido van Rossum wrote:
> ...
>
> The attribute docstring PEP didn't get in (and is unlikely to get in in its
> current form) because I don't like the syntax much, *and* because the way to
> look up the docstrings is weird and ugly: you'd have to use something like
> instance.spam__doc__ or instance.__doc__spam (I forget which; they're both
> weird and ugly).

It was the only way I could think of for having attribute docstrings
behave in the same way as e.g. methods do, that is, they should
respect the class hierarchy in much the same way. This is
obviously needed if you want to document not only the method interface
of a class, but also its attributes which could be accessible from
the outside.

I am not sure whether parsing the module would enable the same
sort of functionality unless Ping's code does its own interpretation
of imports and base class lookups.

Note that the attribute doc string attribute names are really
secondary to the PEP. The main point is using the same syntax
for attribute doc-strings as we already use for classes, modules
and functions.

> I also expect that the doc-sig will be using the same syntax (string
> literals in non-docstring positions) for a different purpose. 

I haven't seen any mention of this on the doc-sig. Could you explain
what they intend to use them for ?

> So I see
> little chance for PEP 224.  Maybe I should just pronounce on this, and
> declare the PEP rejected.

Do you have an alternative approach which meets the design goals
of the PEP ?
 
> Unless Ping thinks this would be a really cool feature to be added to pydoc?
> (Ping's going to change pydoc from importing the target module to scanning
> its source, I believe -- then he could add this feature without changing the
> Python parser. :-)

While Ping's code is way cool, I think we shouldn't forget that
other code will also want to do its own introspection, possibly
even at run-time, which is certainly not possible by (re-)parsing the
source code every time.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/




From tim.one at home.com  Mon Mar 19 06:26:27 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 19 Mar 2001 00:26:27 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: <08e801c0b003$824f4f00$0300a8c0@arthur>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com>

[Alex Martelli]
> ...
> There's another free library that interoperates with GMP to remedy
> this -- it's called MPFR and lives at http://www.loria.fr/projets/mpfr/.
> It's also LGPL.  I haven't looked much into it as it seems it's not been
> ported to Windows yet (and that looks like quite a project) which is
> the platform I'm currently using (and, rationals do what I need:-).

Thanks for the pointer!  From a quick skim, good news & bad news (although
which is which may depend on POV):

+ The authors apparently believe their MPFR routines "should replace
  the MPF class in further releases of GMP".  Then somebody else will
  port them.

+ Allows exact specification of result precision (which will make the
  results 100% platform-independent, unlike GMP's).

+ Allows choice of IEEE 754 rounding modes (unlike GMP's truncation).

+ But is still binary floating-point.

Marc-Andre is especially interested in decimal fixed- and floating-point, and
even more specifically than that, of a flavor that will be efficient for
working with decimal types in databases (which I suspect-- but don't
know --means that I/O (conversion) costs are more important than computation
speed once converted).  GMP + MPFR don't really address the decimal part of
that.  Then again, MAL hasn't quantified any of his desires either <wink>; I
bet he'd be happier with a BCD-ish scheme.

> ...
> Hmmm, port MPFR everywhere...?-)  Pearu Peterson already did the
> MPFR Python wrapper interoperating with GMPY, btw -- it lives at
> http://cens.ioc.ee/~pearu/misc/gmpy-mpfr/ (haven't tested it as
> I can't run MPFR myself, as above explained).

OK, that amounts to ~200 lines of C code to wrap some of the MPFR functions
(exp, log, sqrt, sincos, agm, log2, pi, pow; many remain to be wrapped; and
they don't allow specifying precision yet).  So Pearu still has significant
work to do here, while MAL is wondering who in their right mind would want to
do *anything* with numbers except add them <wink>.

hmm-he's-got-a-good-point-there-ly y'rs  - tim




From dkwolfe at pacbell.net  Mon Mar 19 06:57:53 2001
From: dkwolfe at pacbell.net (Dan Wolfe)
Date: Sun, 18 Mar 2001 21:57:53 -0800
Subject: [Python-Dev] Makefile woos..
Message-ID: <0GAF003LYKEZSF@mta5.snfc21.pbi.net>

While compiling the 2.0b1 release on my shiny new Mac OS X box 
today, I noticed that the fcntl module was breaking, so I went hunting 
for the cause...  (it was better than working on my taxes!)....

To make a long story short... I should have worked on my taxes - at 
least, 80% probability - I understand those...

Ok, the reason that the fcntl module was breaking was that uname now 
reports Darwin 1.3 and it wasn't in the list... in the process of fixing 
that and testing to make sure that it was going to work correctly, I 
discovered that sys.platform was reporting that I was on a darwin1 
platform.... humm where did that come from...

It turns out that MACHDEP is set correctly to Darwin1.3 when 
configure queries the system... however, during the process of 
converting Makefile.pre.in to Makefile it passes through the following sed 
script that starts around line 6284 of the configure script:

sed 's/%@/@@/; s/@%/@@/; s/%g\$/@g/; /@g\$/s/[\\\\&%]/\\\\&/g;
  s/@@/%@/; s/@@/@%/; s/@g\$/%g/' > conftest.subs <<\\CEOF

which when applied to the Makefile.pre.in results in

MACHDEP = darwin1 instead of MACHDEP = darwin1.3

Question 1: I'm not geeky enough to understand why the '.3' gets 
removed... is there a problem with the sed script? or did I overlook 
something?
Question 2: I noticed that all the other versions are 
<OS><MajorRevision> also - is this intentional? or is this just a result 
of the bug in the sed script?
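The effect Dan describes can be reproduced with a rough sketch of the version-truncating logic; the `<OS><MajorRevision>` pattern he notices for the other platforms suggests the truncation is deliberate. The variable names and exact pipeline below are guesses, not configure's actual code:

```shell
# Rough reproduction of the MACHDEP naming Dan observes; variable names
# and the pipeline are guesses, not configure's actual code.
sys="Darwin"
rel="1.3"
machdep="$(echo "$sys" | tr '[A-Z]' '[a-z]')$(echo "$rel" | sed 's/\..*//')"
echo "$machdep"   # darwin1 -- only the major revision survives
```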

If someone can help me understand what's going on here, I'll be glad to 
submit the patch to fix the fcntl module and a few others on Mac OS X.

- Dan - who probably would have finished off his taxes if he hadn't 
opened this box....



From greg at cosc.canterbury.ac.nz  Mon Mar 19 04:02:55 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 19 Mar 2001 15:02:55 +1200 (NZST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AB1ECEA.CD0FFC51@tismer.com>
Message-ID: <200103190302.PAA06055@s454.cosc.canterbury.ac.nz>

Christian Tismer <tismer at tismer.com>:

> But stopping the interpreter is a perfect unwind, and we
> can start again from anywhere.

Hmmm... Let me see if I have this correct.

You can switch from uthread A to uthread B as long
as the current depth of interpreter nesting is the
same as it was when B was last suspended. It doesn't
matter if the interpreter has returned and then
been called again, as long as it's at the same
level of nesting on the C stack.

Is that right? Is that the only restriction?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From uche.ogbuji at fourthought.com  Mon Mar 19 08:09:46 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Mon, 19 Mar 2001 00:09:46 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: Message from "Tim Peters" <tim.one@home.com> 
   of "Sat, 17 Mar 2001 20:36:40 EST." <LNBBLJKPBEHFEDALKOLCGEGCJGAA.tim.one@home.com> 
Message-ID: <200103190709.AAA10053@localhost.localdomain>

> FYI, I pointed a correspondent to Neil's new generator patch (among other
> things), and got this back.  Not being a Web Guy at heart, I don't have a
> clue about XSLT (just enough to know that 4-letter acronyms are a web
> abomination <wink>).
> 
> Note:  in earlier correspondence, the generator idea didn't seem to "click"
> until I called them "resumable functions" (as I often did in the past, but
> fell out of the habit).  People new to the concept often pick that up
> quicker, or even, as in this case, remember that they once rolled such a
> thing by hand out of prior necessity.
> 
> Anyway, possibly food for thought if XSLT means something to you ...

Quite interesting.  I brought up this *exact* point at the Stackless BOF at 
IPC9.  I mentioned that the immediate reason I was interested in Stackless was 
to supercharge the efficiency of 4XSLT.  I think that a stackless 4XSLT could 
pretty much annihilate the other processors in the field for performance.
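The "resumable function" framing is easy to show with the generator syntax under discussion: each yield suspends the function with its local state intact, which is exactly what lazy tree traversal in an XSLT/XPath engine needs. The dict-based tree here is a stand-in for illustration, not 4XSLT's API:

```python
def descendants(node):
    # A "resumable function": the loop position survives across yields,
    # so a caller can pull results one at a time instead of building a
    # full node-set up front.
    for child in node.get('children', []):
        yield child
        for d in descendants(child):
            yield d

tree = {'name': 'root', 'children': [
    {'name': 'a', 'children': [{'name': 'b', 'children': []}]},
]}
assert [n['name'] for n in descendants(tree)] == ['a', 'b']
```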


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From uche.ogbuji at fourthought.com  Mon Mar 19 08:15:07 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Mon, 19 Mar 2001 00:15:07 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: Message from Paul Prescod <paulp@ActiveState.com> 
   of "Sat, 17 Mar 2001 17:50:39 PST." <3AB4146E.62AE3299@ActiveState.com> 
Message-ID: <200103190715.AAA10076@localhost.localdomain>

> I would call what you need for an efficient XSLT implementation "lazy
> lists." They are never infinite but you would rather not pre-compute
> them in advance. Often you use only the first item. Iterators probably
> would be a good implementation technique.

Well, if you don't want unmanageable code, you could get the same benefit as 
Stackless by iterating rather than recursing throughout an XSLT implementation. 
 But why not then go farther?  Implement the whole thing in raw assembler?

What Stackless would give is a way to keep good, readable execution structured 
without sacrificing performance.

XSLT interpreters are complex beasts, and I can't even imagine rewriting 
4XSLT's xsl:call-template dispatch code to be purely iterative.  The result 
would be impenetrable.

But then again, this isn't exactly what you said.  I'm not sure why you think 
lazy lists would make all the difference.  Not so according to my benchmarking.

Aside: XPath node sets are one reason I've been interested in a speed and 
space-efficient set implementation for Python.  However, Guido and Tim are rather 
convincing that this is a fool's errand.


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From MarkH at ActiveState.com  Mon Mar 19 10:40:24 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Mon, 19 Mar 2001 20:40:24 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
Message-ID: <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com>

I understand the issue of "default Unicode encoding" is a loaded one,
however I believe with the Windows' file system we may be able to use a
default.

Windows provides 2 versions of many functions that accept "strings" - one
that uses "char *" arguments, and another using "wchar *" for Unicode.
Interestingly, the "char *" versions of functions almost always support
"mbcs" encoded strings.

To make Python work nicely with the file system, we really should handle
Unicode characters somehow.  It is not too uncommon to find the "program
files" or the "user" directory have Unicode characters in non-english
version of Win2k.

The way I see it, to fix this we have 2 basic choices when a Unicode object
is passed as a filename:
* we call the Unicode versions of the CRTL.
* we auto-encode using the "mbcs" encoding, and still call the non-Unicode
versions of the CRTL.

The first option has a problem in that determining what Unicode support
Windows 95/98 have may be more trouble than it is worth.  Sticking to purely
ascii versions of the functions means that the worst thing that can happen
is we get a regular file-system error if an mbcs encoded string is passed on
a non-Unicode platform.

Does anyone have any objections to this scheme or see any drawbacks in it?
If not, I'll knock up a patch...

Mark.




From mal at lemburg.com  Mon Mar 19 11:09:49 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 11:09:49 +0100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com>
Message-ID: <3AB5DAED.F7089741@lemburg.com>

Mark Hammond wrote:
> 
> I understand the issue of "default Unicode encoding" is a loaded one,
> however I believe with the Windows' file system we may be able to use a
> default.
> 
> Windows provides 2 versions of many functions that accept "strings" - one
> that uses "char *" arguments, and another using "wchar *" for Unicode.
> Interestingly, the "char *" versions of function almost always support
> "mbcs" encoded strings.
> 
> To make Python work nicely with the file system, we really should handle
> Unicode characters somehow.  It is not too uncommon to find the "program
> files" or the "user" directory have Unicode characters in non-english
> version of Win2k.
> 
> The way I see it, to fix this we have 2 basic choices when a Unicode object
> is passed as a filename:
> * we call the Unicode versions of the CRTL.
> * we auto-encode using the "mbcs" encoding, and still call the non-Unicode
> versions of the CRTL.
> 
> The first option has a problem in that determining what Unicode support
> Windows 95/98 have may be more trouble than it is worth.  Sticking to purely
> ascii versions of the functions means that the worst thing that can happen
> is we get a regular file-system error if an mbcs encoded string is passed on
> a non-Unicode platform.
> 
> Does anyone have any objections to this scheme or see any drawbacks in it?
> If not, I'll knock up a patch...

Hmm... the problem with MBCS is that it is not one encoding,
but can be many things. I don't know if this is an issue (can there
be more than one encoding per process ? is the encoding a user or
system setting ? does the CRT know which encoding to use/assume ?),
but the Unicode approach sure sounds a lot safer.

Also, what would os.listdir() return ? Unicode strings or 8-bit
strings ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From MarkH at ActiveState.com  Mon Mar 19 11:34:46 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Mon, 19 Mar 2001 21:34:46 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <3AB5DAED.F7089741@lemburg.com>
Message-ID: <LCEPIIGDJPKCOIHOBJEPMEDHDGAA.MarkH@ActiveState.com>

> Hmm... the problem with MBCS is that it is not one encoding,
> but can be many things.

Yeah, but I think specifically with filenames this is OK.  We would be
translating from Unicode objects using MBCS in the knowledge that somewhere
in the Win32 maze they will be converted back to Unicode, using MBCS, to
access the Unicode based filesystem.

At the moment, you just get an exception - the dreaded "ASCII encoding
error: ordinal not in range(128)" :)

I don't see the harm - we are making no assumptions about the user's data,
just about the platform.  Note that I never want to assume a string object
is in a particular encoding - just assume that the CRTL file functions can
handle a particular encoding for their "filename" parameter.  I don't want
to handle Unicode objects in any "data" params, just the "filename".

Mark.




From MarkH at ActiveState.com  Mon Mar 19 11:53:01 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Mon, 19 Mar 2001 21:53:01 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <3AB5DAED.F7089741@lemburg.com>
Message-ID: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com>

Sorry, I notice I didn't answer your specific question:

> Also, what would os.listdir() return ? Unicode strings or 8-bit
> strings ?

This would not change.

This is what my testing shows:

* I can switch to a German locale, and create a file using the keystrokes
"`atest`o".  The "`" is the dead-char so I get an umlaut over the first and
last characters.

* os.listdir() returns '\xe0test\xf2' for this file.

* That same string can be passed to "open" etc to open the file.

* The only way to get that string to a Unicode object is to use the
encodings "Latin1" or "mbcs".  Of them, "mbcs" would have to be safer, as at
least it has a hope of handling non-latin characters :)

So - assume I am passed a Unicode object that represents this filename.  At
the moment we simply throw that exception if we pass that Unicode object to
open().  I am proposing that "mbcs" be used in this case instead of the
default "ascii".
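In today's terms the proposal is just an encode step before the narrow (char *) CRT call. Since the 'mbcs' codec exists only on Windows, this sketch substitutes 'latin-1' to stay runnable everywhere; the filename is the one from the listdir test above:

```python
# Sketch of the proposed fallback: encode the Unicode filename before it
# reaches the narrow (char *) CRT call.  The 'mbcs' codec exists only on
# Windows, so 'latin-1' stands in for it here to keep this runnable.
filename = u'\xe0test\xf2'                # the name os.listdir() reported
encoded = filename.encode('latin-1')      # on Windows: filename.encode('mbcs')
assert encoded == b'\xe0test\xf2'         # same bytes the 8-bit API accepts
# open(encoded) would then reach the file without the "ordinal not in
# range(128)" error that the default 'ascii' encoding raises.
```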

If nothing else, my idea could be considered a "short-term" solution.  If
ever it is found to be a problem, we can simply move to the unicode APIs,
and nothing would break - just possibly more things _would_ work :)

Mark.




From mal at lemburg.com  Mon Mar 19 12:17:18 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 12:17:18 +0100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com>
Message-ID: <3AB5EABE.CE4C5760@lemburg.com>

Mark Hammond wrote:
> 
> Sorry, I notice I didn't answer your specific question:
> 
> > Also, what would os.listdir() return ? Unicode strings or 8-bit
> > strings ?
> 
> This would not change.
> 
> This is what my testing shows:
> 
> * I can switch to a German locale, and create a file using the keystrokes
> "`atest`o".  The "`" is the dead-char so I get an umlaut over the first and
> last characters.
> 
> * os.listdir() returns '\xe0test\xf2' for this file.
> 
> * That same string can be passed to "open" etc to open the file.
> 
> * The only way to get that string to a Unicode object is to use the
> encodings "Latin1" or "mbcs".  Of them, "mbcs" would have to be safer, as at
> least it has a hope of handling non-latin characters :)
> 
> So - assume I am passed a Unicode object that represents this filename.  At
> the moment we simply throw that exception if we pass that Unicode object to
> open().  I am proposing that "mbcs" be used in this case instead of the
> default "ascii"
> 
> If nothing else, my idea could be considered a "short-term" solution.  If
> ever it is found to be a problem, we can simply move to the unicode APIs,
> and nothing would break - just possibly more things _would_ work :)

Sounds like a good idea. We'd only have to assure that whatever
os.listdir() returns can actually be used to open the file, but that
seems to be the case... at least for Latin-1 chars (I wonder how
well this behaves with Japanese chars).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Mar 19 12:34:30 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 12:34:30 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com>
Message-ID: <3AB5EEC6.F5D6FE3B@lemburg.com>

Tim Peters wrote:
> 
> [Alex Martelli]
> > ...
> > There's another free library that interoperates with GMP to remedy
> > this -- it's called MPFR and lives at http://www.loria.fr/projets/mpfr/.
> > It's also LGPL.  I haven't looked much into it as it seems it's not been
> > ported to Windows yet (and that looks like quite a project) which is
> > the platform I'm currently using (and, rationals do what I need:-).
> 
> Thanks for the pointer!  From a quick skim, good news & bad news (although
> which is which may depend on POV):
> 
> + The authors apparently believe their MPFR routines "should replace
>   the MPF class in further releases of GMP".  Then somebody else will
>   port them.

...or simply install both packages...
 
> + Allows exact specification of result precision (which will make the
>   results 100% platform-independent, unlike GMP's).

This is a Good Thing :)
 
> + Allows choice of IEEE 754 rounding modes (unlike GMP's truncation).
> 
> + But is still binary floating-point.

:-(
 
> Marc-Andre is especially interested in decimal fixed- and floating-point, and
> even more specifically than that, of a flavor that will be efficient for
> working with decimal types in databases (which I suspect-- but don't
> know --means that I/O (conversion) costs are more important than computation
> speed once converted).  GMP + MPFR don't really address the decimal part of
> that.  Then again, MAL hasn't quantified any of his desires either <wink>; I
> bet he'd be happier with a BCD-ish scheme.

The ideal solution for my needs would be an implementation which
allows:

* fast construction of decimals using string input
* fast decimal string output
* good interaction with the existing Python numeric types

BCD-style or simple decimal string style implementations serve
these requirements best; GMP and MPFR don't really fit them.
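A "simple decimal string style" type of the kind named above is small to sketch: keep one arbitrary-precision integer plus a decimal scale, so string input/output is linear and addition is exact. This is a toy for illustration only; the class and method layout are invented here, not taken from any proposal:

```python
class Dec:
    """Toy decimal: value == digits * 10**-scale (illustration only)."""
    def __init__(self, s):
        whole, _, frac = s.partition('.')
        self.scale = len(frac)
        self.digits = int(whole + frac)   # one big integer, exact

    def __add__(self, other):
        # Align scales, then add integers: no binary rounding anywhere.
        k = max(self.scale, other.scale)
        a = self.digits * 10 ** (k - self.scale)
        b = other.digits * 10 ** (k - other.scale)
        r = Dec('0')
        r.scale, r.digits = k, a + b
        return r

    def __str__(self):
        d = str(abs(self.digits)).rjust(self.scale + 1, '0')
        sign = '-' if self.digits < 0 else ''
        return sign + (d if not self.scale
                       else d[:-self.scale] + '.' + d[-self.scale:])

assert str(Dec('1.10') + Dec('2.2')) == '3.30'   # exact, trailing zero kept
```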
 
> > ...
> > Hmmm, port MPFR everywhere...?-)  Pearu Peterson already did the
> > MPFR Python wrapper interoperating with GMPY, btw -- it lives at
> > http://cens.ioc.ee/~pearu/misc/gmpy-mpfr/ (haven't tested it as
> > I can't run MPFR myself, as above explained).
> 
> OK, that amounts to ~200 lines of C code to wrap some of the MPFR functions
> (exp, log, sqrt, sincos, agm, log2, pi, pow; many remain to be wrapped; and
> they don't allow specifying precision yet).  So Pearu still has significant
> work to do here, while MAL is wondering who in their right mind would want to
> do *anything* with numbers except add them <wink>.

Right: as long as there is a possibility to convert these decimals to 
Python floats or integers (or longs) I don't really care ;)

Seriously, I think that the GMP lib + MPFR lib provide a very
good basis to do work with numbers on Unix. Unfortunately, they
don't look very portable (given all that assembler code in there
and the very Unix-centric build system).

Perhaps we'd need a higher level interface to all of this which
can then take GMP or some home-grown "port" of the Python long
implementation to base-10 as backend to do the actual work.

It would have to provide these types:
 Integer - arbitrary precision integers
 Rational - ditto for rational numbers
 Float - ditto for floating point numbers

Integration with Python is easy given the new coercion mechanism
at C level. The problem I see is how to define coercion order, i.e.
Integer + Rational should produce a Rational, but what about
Rational + Float or Float + Python float or Integer + Python float ?
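The coercion ladder asked about here is usually resolved "towards the lossier type". Python's later fractions module, used below purely as an illustration, settles it exactly that way:

```python
from fractions import Fraction

# Integer + Rational -> Rational: exactness is preserved.
assert isinstance(2 + Fraction(1, 3), Fraction)
# Rational + Float -> Float: the inexact operand "wins".
assert isinstance(Fraction(1, 3) + 0.5, float)
# Integer + Float -> Float, completing the ladder sketched above.
assert isinstance(2 + 0.5, float)
```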

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Mon Mar 19 12:38:31 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 12:38:31 +0100
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
References: <LNBBLJKPBEHFEDALKOLCAEIEJGAA.tim_one@email.msn.com>
Message-ID: <3AB5EFB7.2E2AAED0@lemburg.com>

Tim Peters wrote:
> 
> [M.-A. Lemburg]
> > Looking around some more on the web, I found that the GNU MP (GMP)
> > lib has switched from being GPLed to LGPLed,
> 
> Right.
> 
> > meaning that it can actually be used by non-GPLed code as long as
> > the source code for the GMP remains publically accessible.
> 
> Ask Stallman <0.9 wink>.
> 
> > ...
> > Since the GMP offers arbitrary precision numbers and also has
> > a rational number implementation I wonder if we could use it
> > in Python to support fractions and arbitrary precision
> > floating points ?!
> 
> Note that Alex Martelli runs the "General Multiprecision Python" project on
> SourceForge:
> 
>     http://gmpy.sourceforge.net/
> 
> He had a severe need for fast rational arithmetic in his Python programs, so
> he started wrapping the full GMP out of necessity.

I found that link after hacking away at yet another GMP
wrapper for three hours Friday night... turned out to be a nice
proof of concept, but also showed some issues with respect to
coercion (see my other reply).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From gherman at darwin.in-berlin.de  Mon Mar 19 12:57:49 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 12:57:49 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
Message-ID: <3AB5F43D.E33B188D@darwin.in-berlin.de>

I wrote on comp.lang.python today:
> 
> is there a simple way (or any way at all) to find out for 
> any given hard disk how much free space is left on that
> device? I looked into the os module, but either not hard
> enough or there is no such function. Of course, the ideal
> > solution would be platform-independent, too... :)

Is there any good reason for not having a cross-platform
solution to this? I'm certainly not the first to ask for
such a function and it certainly exists for all platforms,
doesn't it?

Unfortunately, OS problems like that make it rather impossible
to write truly cross-platform applications in Python, even
though it is touted to be exactly that.

I know that OSes differ in the services they provide, but in
this case it seems to me that each one *must* have such a 
function, so I don't understand why it's not there...

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")



From thomas at xs4all.net  Mon Mar 19 13:07:13 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 13:07:13 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5F43D.E33B188D@darwin.in-berlin.de>; from gherman@darwin.in-berlin.de on Mon, Mar 19, 2001 at 12:57:49PM +0100
References: <3AB5F43D.E33B188D@darwin.in-berlin.de>
Message-ID: <20010319130713.M29286@xs4all.nl>

On Mon, Mar 19, 2001 at 12:57:49PM +0100, Dinu Gherman wrote:
> I wrote on comp.lang.python today:
> > is there a simple way (or any way at all) to find out for 
> > any given hard disk how much free space is left on that
> > device? I looked into the os module, but either not hard
> > enough or there is no such function. Of course, the ideal
> > solution would be platform-independent, too... :)

> Is there any good reason for not having a cross-platform
> solution to this? I'm certainly not the first to ask for
> such a function and it certainly exists for all platforms,
> doesn't it?

I think the main reason such a function does not exist is that no-one wrote
it. If you can write a portable function, or fake one by making different
implementations on different platforms, please contribute ;) Step one is
making an inventory of the available functions, though, so you know how
large an intersection you have to work with. The fact that you have to start
that study is probably the #1 reason no-one's done it yet :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From nhodgson at bigpond.net.au  Mon Mar 19 13:06:40 2001
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Mon, 19 Mar 2001 23:06:40 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com>
Message-ID: <09c001c0b06d$0f359eb0$8119fea9@neil>

Mark Hammond:

> To make Python work nicely with the file system, we really
> should handle Unicode characters somehow.  It is not too
> uncommon to find the "program files" or the "user" directory
> have Unicode characters in non-english version of Win2k.

   The "program files" and "user" directories should still have names
representable in the user's normal locale, so the user can access them
by passing their standard encoding in a Python narrow character
string to the open function.

> The way I see it, to fix this we have 2 basic choices when a Unicode
object
> is passed as a filename:
> * we call the Unicode versions of the CRTL.

   This is by far the better approach IMO as it is more general and will
work for people who switch locales or who want to access files created by
others using other locales. Although you can always use the horrid mangled
"*~1" names.

> * we auto-encode using the "mbcs" encoding, and still call the non-Unicode
> versions of the CRTL.

   This will improve things but to a lesser extent than the above. May be
the best possible on 95.

> The first option has a problem in that determining what Unicode support
> Windows 95/98 have may be more trouble than it is worth.

    None of the *W file calls are listed as supported by 95 although Unicode
file names can certainly be used on FAT partitions.

> * I can switch to a German locale, and create a file using the
> keystrokes "`atest`o".  The "`" is the dead-char so I get an
> umlaut over the first and last characters.

   It's more fun playing with a non-roman locale, and one that doesn't fit in
the normal Windows code page for this sort of problem. Russian is reasonably
readable for us English speakers.

M.-A. Lemburg:
> I don't know if this is an issue (can there
> be more than one encoding per process ?

   There is an input locale and keyboard layout per thread.

> is the encoding a user or system setting ?

   There are system defaults and a menu through which you can change the
locale whenever you want.

> Also, what would os.listdir() return ? Unicode strings or 8-bit
> strings ?

   There is the Windows approach of having an os.listdirW() ;) .

   Neil






From thomas at xs4all.net  Mon Mar 19 13:13:26 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 13:13:26 +0100
Subject: [Python-Dev] Makefile woos..
In-Reply-To: <0GAF003LYKEZSF@mta5.snfc21.pbi.net>; from dkwolfe@pacbell.net on Sun, Mar 18, 2001 at 09:57:53PM -0800
References: <0GAF003LYKEZSF@mta5.snfc21.pbi.net>
Message-ID: <20010319131325.N29286@xs4all.nl>

On Sun, Mar 18, 2001 at 09:57:53PM -0800, Dan Wolfe wrote:

> Question 1: I'm not geeky enough to understand why the '.3' gets 
> removed.... is there a problem with the SED script? or did I overlook 
> something?
> Question 2: I noticed that all the other versions are 
> <OS><MajorRevision> also - is this intentional? or is this just a result 
> of the bug in the SED script

I believe it's intentional. I'm pretty sure it'll break stuff if it's
changed, in any case. It relies on the convention that the OS release
numbers actually mean something: nothing serious changes when the minor
version number is upped, so there is no need to have a separate architecture
directory for it.

> If someone can help me understand what's going on here, I'll be glad to 
> submit the patch to fix the fcntl module and a few others on Mac OS X.

Are you sure the 'darwin1' arch name is really the problem ? As long as you
have that directory, which should be filled by 'make Lib/plat-darwin1' and
by 'make install' (but not by 'make test', unfortunately) it shouldn't
matter.

(So my guess is: you're doing configure, make, make test, and the
plat-darwin1 directory isn't made then, so tests that rely (indirectly) on
it will fail. Try using 'make plat-darwin1' before 'make test'.)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gherman at darwin.in-berlin.de  Mon Mar 19 13:21:44 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 13:21:44 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl>
Message-ID: <3AB5F9D8.74F0B55F@darwin.in-berlin.de>

Thomas Wouters wrote:
> 
> I think the main reason such a function does not exist is that no-one wrote
> it. If you can write a portable function, or fake one by making different
> implementations on different platforms, please contribute ;) Step one is
> making an inventory of the available functions, though, so you know how
> large an intersection you have to work with. The fact that you have to start
> that study is probably the #1 reason no-one's done it yet :)

Well, this is the usual "If you need it, do it yourself!"
answer, that bites the one who dares to speak up for all
those hundreds who don't... isn't it?

Rather than asking one non-expert in N-1 +/- 1 operating
systems to implement it, why not ask N experts in
implementing Python on 1 platform to do the job? (Notice the
potential for parallelism?! :)

Uhmm, seriously, does it really take 10 years for such an 
issue to creep up high enough on the priority ladder of 
Python-Labs? 

In any case it doesn't sound like a Python 3000 feature to 
me, or maybe it should?

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")



From mal at lemburg.com  Mon Mar 19 13:34:45 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 13:34:45 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de>
Message-ID: <3AB5FCE5.92A133AB@lemburg.com>

Dinu Gherman wrote:
> 
> Thomas Wouters wrote:
> >
> > I think the main reason such a function does not exist is that no-one wrote
> > it. If you can write a portable function, or fake one by making different
> > implementations on different platforms, please contribute ;) Step one is
> > making an inventory of the available functions, though, so you know how
> > large an intersection you have to work with. The fact that you have to start
> > that study is probably the #1 reason no-one's done it yet :)
> 
> Well, this is the usual "If you need it, do it yourself!"
> answer, that bites the one who dares to speak up for all
> those hundreds who don't... isn't it?
> 
> Rather than asking one non-expert in N-1 +/- 1 operating
> systems to implement it, why not ask N experts in
> implementing Python on 1 platform to do the job? (Notice the
> potential for parallelism?! :)

I think the problem with this one really is the differences
in OS designs, e.g. on Windows you have the concept of drive
letters where on Unix you have mounted file systems. Then there
also is the concept of disk space quota per user which would
have to be considered too.

Also, calculating the available disk space may return false
results (e.g. for Samba shares).

Perhaps what we really need is some kind of probing function
which tests whether a certain amount of disk space would be
available ?!

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Mon Mar 19 13:43:23 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 13:43:23 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5F9D8.74F0B55F@darwin.in-berlin.de>; from gherman@darwin.in-berlin.de on Mon, Mar 19, 2001 at 01:21:44PM +0100
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de>
Message-ID: <20010319134323.W27808@xs4all.nl>

On Mon, Mar 19, 2001 at 01:21:44PM +0100, Dinu Gherman wrote:
> Thomas Wouters wrote:
> > 
> > I think the main reason such a function does not exist is that no-one wrote
> > it. If you can write a portable function, or fake one by making different
> > implementations on different platforms, please contribute ;) Step one is
> > making an inventory of the available functions, though, so you know how
> > large an intersection you have to work with. The fact that you have to start
> > that study is probably the #1 reason no-one's done it yet :)
> 
> Well, this is the usual "If you need it, do it yourself!"
> answer, that bites the one who dares to speak up for all
> those hundreds who don't... isn't it?
> 
> Rather than asking one non-expert in N-1 +/- 1 operating
> systems to implement it, why not ask N experts in
> implementing Python on 1 platform to do the job? (Notice the
> potential for parallelism?! :)
> 
> Uhmm, seriously, does it really take 10 years for such an 
> issue to creep up high enough on the priority ladder of 
> Python-Labs? 

> In any case it doesn't sound like a Python 3000 feature to 
> me, or maybe it should?

Nope. But you seem to misunderstand the idea behind Python development (and
most of open-source development.) PythonLabs has a *lot* of stuff they have
to do, and you cannot expect them to do everything. Truth is, this is not
likely to be done by Pythonlabs, and it will never be done unless someone
does it. It might sound harsh and unfriendly, but it's just a fact. It
doesn't mean *you* have to do it, but that *someone* has to do it. Feel free
to find someone to do it :)

As for the parallelism: that means getting even more people to volunteer for
the task. And the person(s) doing it still have to figure out the common
denominators in 'get me free disk space info'.

And the fact that it's *been* 10 years shows that no one cares enough about
the free disk space issue to actually get people to code it. 10 years filled
with a fair share of C programmers starting to use Python, so plenty of
those people could've done it :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Mon Mar 19 13:57:09 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 07:57:09 -0500
Subject: [Python-numerics]Re: [Python-Dev] Re: WYSIWYG decimal fractions
In-Reply-To: Your message of "Mon, 19 Mar 2001 00:26:27 EST."
             <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCIEJEJGAA.tim.one@home.com> 
Message-ID: <200103191257.HAA25649@cj20424-a.reston1.va.home.com>

Is there any point still copying this thread to both
python-dev at python.org and python-numerics at lists.sourceforge.net?

It's best to move it to the latter, I "pronounce". :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From gherman at darwin.in-berlin.de  Mon Mar 19 13:58:48 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 13:58:48 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <20010319134323.W27808@xs4all.nl>
Message-ID: <3AB60288.2915DF32@darwin.in-berlin.de>

Thomas Wouters wrote:
> 
> Nope. But you seem to misunderstand the idea behind Python development (and
> most of open-source development.) 

Not sure what makes you think that, but anyway.

> PythonLabs has a *lot* of stuff they have
> to do, and you cannot expect them to do everything. Truth is, this is not
> likely to be done by Pythonlabs, and it will never be done unless someone
> does it.

Apparently, I agree; I know less about what counts as truth
here. What is probably valid is that having much to do is true
for everybody, and so not much of an argument, is it?

> As for the parallelism: that means getting even more people to volunteer for
> the task. And the person(s) doing it still have to figure out the common
> denominators in 'get me free disk space info'.

I'm afraid this is like arguing in circles.

> And the fact that it's *been* 10 years shows that noone cares enough about
> the free disk space issue to actually get people to code it. 10 years filled
> with a fair share of C programmers starting to use Python, so plenty of
> those people could've done it :)

I'm afraid, again, that the impression you have of nobody in
ten years asking for this function is just that, an impression,
unless *somebody* proves the contrary.

All I can say is that I'm writing an app that I want to be 
cross-platform and that Python does not allow it to be just 
that, while Google gives you 17400 hits if you look for 
"python cross-platform". Now, this is also some kind of
*truth*, if only one of a mismatch between reality and
wishful thinking...

Regards,

Dinu



From guido at digicool.com  Mon Mar 19 14:00:44 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 08:00:44 -0500
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: Your message of "Mon, 19 Mar 2001 15:02:55 +1200."
             <200103190302.PAA06055@s454.cosc.canterbury.ac.nz> 
References: <200103190302.PAA06055@s454.cosc.canterbury.ac.nz> 
Message-ID: <200103191300.IAA25681@cj20424-a.reston1.va.home.com>

> Christian Tismer <tismer at tismer.com>:
> 
> > But stopping the interpreter is a perfect unwind, and we
> > can start again from anywhere.
> 
> Hmmm... Let me see if I have this correct.
> 
> You can switch from uthread A to uthread B as long
> as the current depth of interpreter nesting is the
> same as it was when B was last suspended. It doesn't
> matter if the interpreter has returned and then
> been called again, as long as it's at the same
> level of nesting on the C stack.
> 
> Is that right? Is that the only restriction?

I doubt it.  To me (without a lot of context, but knowing ceval.c :-)
it would make more sense if the requirement was that there were no C
stack frames involved in B -- only Python frames.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mal at lemburg.com  Mon Mar 19 14:07:25 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 14:07:25 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <3AB5FCE5.92A133AB@lemburg.com> <3AB5FFB8.E138160A@darwin.in-berlin.de>
Message-ID: <3AB6048D.4E24AC4F@lemburg.com>

Dinu Gherman wrote:
> 
> "M.-A. Lemburg" wrote:
> >
> > I think the problem with this one really is the differences
> > in OS designs, e.g. on Windows you have the concept of drive
> > letters where on Unix you have mounted file systems. Then there
> > also is the concept of disk space quota per user which would
> > have to be considered too.
> 
> I'd be perfectly happy with something like this:
> 
>   import os
>   free = os.getfreespace('c:\\')          # on Win
>   free = os.getfreespace('/hd5')          # on Unix-like boxes
>   free = os.getfreespace('Macintosh HD') # on Macs
>   free = os.getfreespace('ZIP-1')         # on Macs, Win, ...
> 
> etc. where the string passed is, a-priori, a name known
> by the OS for some permanent or removable drive. Network
> drives might be slightly more tricky, but probably not
> entirely impossible, I guess.

This sounds like a lot of different platform C APIs would need
to be wrapped first, e.g. quotactrl, getrlimit (already done)
+ a bunch of others since "get free space" is usually a file system
dependent call.

I guess we should take a look at how "df" does this on Unix
and maybe trick Mark Hammond into looking up the win32 API ;-)

> > Perhaps what we really need is some kind of probing function
> > which tests whether a certain amount of disk space would be
> > available ?!
> 
> Something like incrementally stuffing it with junk data until
> you get an exception, right? :)

Yep. Actually opening a file in record mode and then using
file.seek() should work on many platforms.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/
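[MAL's probing idea — stuff the disk until it complains — can be sketched as follows. Note one caveat his file.seek() suggestion runs into: on most Unix filesystems a bare seek creates a sparse file without allocating blocks, so a faithful probe has to actually write the data. The function name is illustrative, not an existing API.]

```python
import os
import tempfile

def can_allocate(directory, nbytes, chunk=1 << 20):
    """Probe whether roughly `nbytes` can really be written under `directory`.

    Writes a throwaway file in chunks and removes it afterwards.  A plain
    seek() past the end would create a sparse file on many filesystems
    and report success without consuming any space.
    """
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            block = b"\0" * chunk
            written = 0
            while written < nbytes:
                piece = block[: nbytes - written]
                f.write(piece)
                written += len(piece)
        return True
    except OSError:
        # Typically ENOSPC ("No space left on device") or a quota error.
        return False
    finally:
        os.remove(path)

print(can_allocate(tempfile.gettempdir(), 1024))
```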



From fredrik at pythonware.com  Mon Mar 19 14:04:59 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 19 Mar 2001 14:04:59 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de>
Message-ID: <029401c0b075$3c18e2e0$0900a8c0@SPIFF>

dinu wrote:
> Well, this is the usual "If you need it, do it yourself!"
> answer, that bites the one who dares to speak up for all
> those hundreds who don't... isn't it?

fwiw, Python already supports this for real Unix platforms:

>>> os.statvfs("/")    
(8192, 512, 524288, 365788, 348336, 600556, 598516, 598516, 0, 255)

here, the root disk holds 524288x512 bytes, with 348336x512
bytes free for the current user, and 365788x512 bytes available
for root.

(the statvfs module contains indices for accessing this "struct")

Implementing a small subset of statvfs for Windows wouldn't
be that hard (possibly returning None for fields that don't make
sense, or are too hard to figure out).

(and with win32all, I'm sure it can be done without any C code).

Cheers /F
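[Fredrik's os.statvfs() numbers can be wrapped in a small helper — a Unix-only sketch; the index-constants module he mentions was later removed, and modern Python exposes the fields as named attributes instead. (The general function Dinu asks for eventually arrived as shutil.disk_usage.)]

```python
import os

def free_bytes(path):
    """Bytes available to the current (non-root) user under `path`.

    Unix-only sketch: multiplies the fragment size (f_frsize) by the
    number of free blocks available to unprivileged users (f_bavail) --
    the 512 x 348336 figure in Fredrik's example.
    """
    st = os.statvfs(path)
    return st.f_frsize * st.f_bavail

print(free_bytes("/"))
```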




From guido at digicool.com  Mon Mar 19 14:12:58 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 08:12:58 -0500
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: Your message of "Mon, 19 Mar 2001 21:53:01 +1100."
             <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com> 
References: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com> 
Message-ID: <200103191312.IAA25747@cj20424-a.reston1.va.home.com>

> > Also, what would os.listdir() return ? Unicode strings or 8-bit
> > strings ?
> 
> This would not change.
> 
> This is what my testing shows:
> 
> * I can switch to a German locale, and create a file using the keystrokes
> "`atest`o".  The "`" is the dead-char so I get an umlaut over the first and
> last characters.

(Actually, grave accents, but I'm sure that to Aussie eyes, as to
American ones, they're all Greek. :-)

> * os.listdir() returns '\xe0test\xf2' for this file.

I don't understand.  This is a Latin-1 string.  Can you explain again
how the MBCS encoding encodes characters outside the Latin-1 range?

> * That same string can be passed to "open" etc to open the file.
> 
> * The only way to get that string to a Unicode object is to use the
> encodings "Latin1" or "mbcs".  Of them, "mbcs" would have to be safer, as at
> least it has a hope of handling non-latin characters :)
> 
> So - assume I am passed a Unicode object that represents this filename.  At
> the moment we simply throw that exception if we pass that Unicode object to
> open().  I am proposing that "mbcs" be used in this case instead of the
> default "ascii"
> 
> If nothing else, my idea could be considered a "short-term" solution.  If
> ever it is found to be a problem, we can simply move to the unicode APIs,
> and nothing would break - just possibly more things _would_ work :)

I have one more question.  The plan looks decent, but I don't know the
scope.  Which calls do you plan to fix?

--Guido van Rossum (home page: http://www.python.org/~guido/)
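[Guido's puzzlement can be checked directly. For characters in the Latin-1 range, the common Windows Western code page (cp1252, which "mbcs" resolves to on such systems) produces the same bytes as ISO-8859-1, which is why Mark's listdir() result looks like a Latin-1 string; the two encodings diverge only for multi-byte code pages and the 0x80-0x9F range. A sketch runnable anywhere, since the "mbcs" codec itself exists only on Windows:]

```python
# The filename Mark typed: a-grave + "test" + o-grave.
name = "\u00e0test\u00f2"

# cp1252 and ISO-8859-1 agree on these code points, so the bytes
# match the '\xe0test\xf2' that os.listdir() returned.
assert name.encode("latin-1") == b"\xe0test\xf2"
assert name.encode("cp1252") == b"\xe0test\xf2"

print(b"\xe0test\xf2".decode("latin-1"))
```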



From thomas at xs4all.net  Mon Mar 19 14:18:34 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Mon, 19 Mar 2001 14:18:34 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB60288.2915DF32@darwin.in-berlin.de>; from gherman@darwin.in-berlin.de on Mon, Mar 19, 2001 at 01:58:48PM +0100
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <20010319134323.W27808@xs4all.nl> <3AB60288.2915DF32@darwin.in-berlin.de>
Message-ID: <20010319141834.X27808@xs4all.nl>

On Mon, Mar 19, 2001 at 01:58:48PM +0100, Dinu Gherman wrote:

> All I can say is that I'm writing an app that I want to be 
> cross-platform and that Python does not allow it to be just 
> that, while Google gives you 17400 hits if you look for 
> "python cross-platform". Now, this is also some kind of 
> *truth* if only one of a mismatch between reality and wish-
> ful thinking...

I'm sure I agree, but I don't see the value in dropping everything to write
a function so Python can be that much more cross-platform. (That's just me,
though.) Python wouldn't *be* as cross-platform as it is now if not for a
group of people who weren't satisfied with it, and improved on it. And a lot
of those people were not Guido or even of the current PythonLabs team.

I've never really believed in the 'true cross-platform nature' of Python,
mostly because I know it can't *really* be true. Most of my scripts are not
portable to non-UNIX platforms, due to the use of sockets, pipes, and
hardcoded filepaths (/usr/...). Even so, I can hardly agree that Python
isn't cross-platform just because there is no portable way (if any at all)
to find out how much disk space is free. Arguably, lacking that function
makes it *more* cross-platform: some platforms might not have the concept
of 'free space' :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From gherman at darwin.in-berlin.de  Mon Mar 19 14:23:51 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 14:23:51 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <20010319134323.W27808@xs4all.nl> <3AB60288.2915DF32@darwin.in-berlin.de> <20010319141834.X27808@xs4all.nl>
Message-ID: <3AB60867.3D2A9DF@darwin.in-berlin.de>

Thomas Wouters wrote:
> 
> I've never really believed in the 'true cross-platform nature' of Python,
> mostly because I know it can't *really* be true. Most of my scripts are not
> portably to non-UNIX platforms, due to the use of sockets, pipes, and
> hardcoded filepaths (/usr/...). Even if I did, I can hardly agree that
> because there is no portable way (if any at all) to find out howmany
> diskspace is free, it isn't cross-platform. Just *because* it lacks that
> function makes it more cross-platform: platforms might not have the concept
> of 'free space' :)

Hmm, that means we had better strip the standard library of
most of its modules (why not all?), because the less 
content there is, the more cross-platform it will be, 
right?

Well, if the concept is not there, simply throw a neat 
ConceptException! ;-)

Dinu



From gherman at darwin.in-berlin.de  Mon Mar 19 14:32:17 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 14:32:17 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>
Message-ID: <3AB60A61.A4BB2768@darwin.in-berlin.de>

Fredrik Lundh wrote:
> 
> fwiw, Python already supports this for real Unix platforms:
> 
> >>> os.statvfs("/")
> (8192, 512, 524288, 365788, 348336, 600556, 598516, 598516, 0, 255)
> 
> here, the root disk holds 524288x512 bytes, with 348336x512
> bytes free for the current user, and 365788x512 bytes available
> for root.
> 
> (the statvfs module contains indices for accessing this "struct")
> 
> Implementing a small subset of statvfs for Windows wouldn't
> be that hard (possibly returning None for fields that don't make
> sense, or are too hard to figure out).
> 
> (and with win32all, I'm sure it can be done without any C code).
> 
> Cheers /F

Everything correct! 

I'm just trying to make the point that from a user perspective 
it would be more complete to have such a function in the os 
module (where it belongs), one that would also work on Macs, 
as well as more convenient: even where it exists in modules 
like win32api (where it does) or in one of the (many) mac* 
modules (which I don't know yet whether it does), it would 
save you the if-statement on sys.platform.

It sounds silly to me if people now pushed into learning
Python as a first programming language had to use such
statements to get along, but were given the 'gift' of 1/2 = 0.5,
which we seem to spend an increasing amount of brain cycles
on...

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")



From Greg.Wilson at baltimore.com  Mon Mar 19 14:32:21 2001
From: Greg.Wilson at baltimore.com (Greg Wilson)
Date: Mon, 19 Mar 2001 08:32:21 -0500
Subject: [Python-Dev] BOOST Python library
Message-ID: <930BBCA4CEBBD411BE6500508BB3328F1AC817@nsamcanms1.ca.baltimore.com>

Might be of interest to people binding C++ to Python...

http://www.boost.org/libs/python/doc/index.html

Greg

By the way, http://mail.python.org/pipermail/python-list/
now seems to include archives for February 2005.  Is this
another "future" import?





From tismer at tismer.com  Mon Mar 19 14:46:19 2001
From: tismer at tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 14:46:19 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103190302.PAA06055@s454.cosc.canterbury.ac.nz> <200103191300.IAA25681@cj20424-a.reston1.va.home.com>
Message-ID: <3AB60DAB.D92D12BF@tismer.com>


Guido van Rossum wrote:
> 
> > Christian Tismer <tismer at tismer.com>:
> >
> > > But stopping the interpreter is a perfect unwind, and we
> > > can start again from anywhere.
> >
> > Hmmm... Let me see if I have this correct.
> >
> > You can switch from uthread A to uthread B as long
> > as the current depth of interpreter nesting is the
> > same as it was when B was last suspended. It doesn't
> > matter if the interpreter has returned and then
> > been called again, as long as it's at the same
> > level of nesting on the C stack.
> >
> > Is that right? Is that the only restriction?
> 
> I doubt it.  To me (without a lot of context, but knowing ceval.c :-)
> it would make more sense if the requirement was that there were no C
> stack frames involved in B -- only Python frames.

Right. And that is only a dynamic restriction. It does not
matter how and where frames were created; it is just impossible
to jump to a frame that is held by an interpreter on the C stack.
The key to circumventing this (and the advantage of uthreads) is
not to force a jump from a nested interpreter, but to arrange
for it to happen. That is, the scheduling interpreter
does the switch, not the nested one.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From fredrik at pythonware.com  Mon Mar 19 14:54:03 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Mon, 19 Mar 2001 14:54:03 +0100
Subject: [Python-Dev] BOOST Python library
References: <930BBCA4CEBBD411BE6500508BB3328F1AC817@nsamcanms1.ca.baltimore.com>
Message-ID: <02ba01c0b07c$0ff8c9d0$0900a8c0@SPIFF>

greg wrote:
> By the way, http://mail.python.org/pipermail/python-list/
> now seems to include archives for February 2005.  Is this
> another "future" import?

did you read the post?




From gmcm at hypernet.com  Mon Mar 19 15:27:04 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Mon, 19 Mar 2001 09:27:04 -0500
Subject: [Python-Dev] Function in os module for available disk space, why  not?
In-Reply-To: <3AB60A61.A4BB2768@darwin.in-berlin.de>
Message-ID: <3AB5D0E8.16418.990252B8@localhost>

Dinu Gherman wrote:

[disk free space...]
> I'm just trying to make the point that from a user perspective it
> would be more complete to have such a function in the os module
> (where it belongs), that would also work on Macs e.g., as well as
> more convenient, because even when that existed in modules like
> win32api (where it does) and in one of the (many) mac* ones
> (which I don't know yet if it does) it would save you the
> if-statement on sys.platform.

Considering that:
 - it's not uncommon to map things into the filesystem's 
namespace for which "free space" is meaningless
 - for network mapped storage space it's quite likely you can't 
get a meaningful number
 - for compressed file systems the number will be inaccurate
 - even if you get an accurate answer, the space may not be 
there when you go to use it (so need try... except anyway)

I find it perfectly sensible that Python does not dignify this 
mess with an official function.
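If you do want the number anyway, the only sane shape for it is
advisory, with everything wrapped in a try. A quick sketch (POSIX
statvfs only; the function name is made up, and the result should
never be trusted as a guarantee):

```python
import os

def advisory_free_bytes(path):
    """Best-effort free space in bytes, or None if unknowable.

    Advisory only: may be stale, meaningless for network-mapped
    or compressed filesystems, and gone by the time you write.
    """
    try:
        st = os.statvfs(path)    # POSIX only; AttributeError elsewhere
    except (AttributeError, OSError):
        return None              # no statvfs, or the query itself failed
    return st[4] * st[1]         # f_bavail * f_frsize
```

And since the space may vanish before you use it, the actual write
still needs its own try...except regardless.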

- Gordon



From guido at digicool.com  Mon Mar 19 15:58:29 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 09:58:29 -0500
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: Your message of "Mon, 19 Mar 2001 14:32:17 +0100."
             <3AB60A61.A4BB2768@darwin.in-berlin.de> 
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>  
            <3AB60A61.A4BB2768@darwin.in-berlin.de> 
Message-ID: <200103191458.JAA26035@cj20424-a.reston1.va.home.com>

> I'm just trying to make the point that from a user perspective 
> it would be more complete to have such a function in the os 
> module (where it belongs), that would also work on Macs e.g., 
> as well as more convenient, because even when that existed in 
> modules like win32api (where it does) and in one of the (many) 
> mac* ones (which I don't know yet if it does) it would save 
> you the if-statement on sys.platform.

Yeah, yeah, yeah.  Whine, whine, whine.  As has been made abundantly
clear, doing this cross-platform requires a lot of detailed platform
knowledge.  We at PythonLabs don't have all the wisdom, and we often
rely on outsiders to help us out.  Until now, finding out how much
free space there is on a disk hasn't been requested much (in fact I
don't recall seeing a request for it before).  That's why it isn't
already there -- that plus the fact that traditionally on Unix this
isn't easy to find out (statvfs didn't exist when I wrote most of the
posix module).  I'm not against adding it, but I'm not particularly
motivated to add it myself because I have too much to do already (and
the same's true for all of us here at PythonLabs).

> It sounds silly to me if people now pushed into learning
> Python as a first programming language had to use such
> statements to get along, but were given the 'gift' of 1/2 = 0.5
> which we seem to spend an increasing amount of brain cycles
> on...

I would hope that you agree with me though that the behavior of
numbers is a lot more fundamental to education than finding out
available disk space.  The latter is just a system call of use to a
small number of professionals.  The former has usability implications
for all Python users.

--Guido van Rossum (home page: http://www.python.org/~guido/)




From gherman at darwin.in-berlin.de  Mon Mar 19 16:32:51 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 16:32:51 +0100
Subject: [Python-Dev] Function in os module for available disk space, why  
 not?
References: <3AB5D0E8.16418.990252B8@localhost>
Message-ID: <3AB626A3.CA4B6174@darwin.in-berlin.de>

Gordon McMillan wrote:
> 
> Considering that:
>  - it's not uncommon to map things into the filesystem's
>    namespace for which "free space" is meaningless

Unless I'm totally stupid, I see the concept of "free space" as
being tied to the *device*, not to anything being mapped to it 
or not.

>  - for network mapped storage space it's quite likely you can't
>    get a meaningful number

Fine, then let's play the exception blues...

>  - for compressed file systems the number will be inaccurate

Then why is the OS function call there...? And: nobody can
*seriously* expect an accurate figure of the remaining space
for compressed file systems, anyway, and I think nobody does!
But there will always be some number >= 0 of uncompressed
available bytes left.

>  - even if you get an accurate answer, the space may not be
>    there when you go to use it (so need try... except anyway)

The same holds for open(path, 'w') - and still this function is 
considered useful, isn't it?!

> I find it perfectly sensible that Python does not dignify this
> mess with an official function.

Well, I have yet to see a good argument against this...

Regards,

Dinu



From mal at lemburg.com  Mon Mar 19 16:46:34 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 16:46:34 +0100
Subject: [Python-Dev] BOOST Python library
References: <930BBCA4CEBBD411BE6500508BB3328F1AC817@nsamcanms1.ca.baltimore.com>
Message-ID: <3AB629DA.52C72E57@lemburg.com>

Greg Wilson wrote:
> 
> Might be of interest to people binding C++ to Python...
> 
> http://www.boost.org/libs/python/doc/index.html

Could someone please add links to all the tools they mention
in their comparison to the c++-sig page? (Not even SWIG is
mentioned there.)

  http://www.boost.org/libs/python/doc/comparisons.html

BTW, most SIGs have long expired... I guess bumping the year from
2000 to 2002 would help ;-)

Thanks,
-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From tismer at tismer.com  Mon Mar 19 16:49:37 2001
From: tismer at tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 16:49:37 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com>
Message-ID: <3AB62A91.1DBE7F8B@tismer.com>


Neil Schemenauer wrote:
> 
> I've got a different implementation.  There are no new keywords
> and it's simpler to wrap a high-level interface around the
> low-level interface.
> 
>     http://arctrix.com/nas/python/generator2.diff
> 
> What the patch does:
> 
>     Split the big for loop and switch statement out of eval_code2
>     into PyEval_EvalFrame.
> 
>     Add a new "why" flag for ceval, WHY_SUSPEND.  It is similar to
>     WHY_RETURN except that the frame value stack and the block stack
>     are not touched.  The frame is also marked resumable before
>     returning (f_stackbottom != NULL).
> 
>     Add two new methods to frame objects, suspend and resume.
>     suspend takes one argument which gets attached to the frame
>     (f_suspendvalue).  This tells ceval to suspend as soon as control
>     gets back to this frame.  resume, strangely enough, resumes a
>     suspended frame.  Execution continues at the point it was
>     suspended.  This is done by calling PyEval_EvalFrame on the frame
>     object.
> 
>     Make frame_dealloc clean up the stack and decref f_suspendvalue
>     if it exists.
> 
> There are probably still bugs and it slows down ceval too much
> but otherwise things are looking good.  Here are some examples
> (they're a little long but illustrative).  Low-level
> interface, similar to my last example:

I've had a closer look at your patch (without actually applying
and running it) and it looks good to me.
A possible bug may be in frame_resume, where you are doing
+       f->f_back = tstate->frame;
without taking care of the prior value of f_back.

There is a little problem with your approach which I have
to mention: I believe that without further patching it will be
easy to crash Python.
By giving frames the suspend and resume methods, you are
opening frames to everybody in a way that allows them to be
treated as a kind of callable object. This is the same problem
that Stackless ran into.
By doing so, it might be possible to call any frame, even
if it is currently being run by a nested interpreter.

I see two solutions to get out of this:

1) introduce a lock flag for frames which are currently
   executed by some interpreter on the C stack. This is
   what Stackless does currently.
   Maybe you can just use your new f_suspendvalue field.
   frame_resume must check that this value is not NULL
   on entry, and set it to zero before resuming.
   See below for more.

2) Do not expose the resume and suspend methods to the
   Python user, and recode Generator.py as an extension
   module in C. This should prevent abuse of frames.

Proposal for a different interface:
I would change the interface of PyEval_EvalFrame
to accept a return value passed in, like Stackless
has its "passed_retval", and maybe another variable
that explicitly tells the kind of the frame call,
i.e. passing the desired why_code. This also would
make it easier to cope with the other needs of Stackless
later in a cleaner way.
Well, I see you are clearing the f_suspendvalue later.
Maybe just adding the why_code to the parameters
would do. f_suspendvalue can be used for different
things, it can also become the place to store a return
value, or a coroutine transfer parameter.

In the future, there will not only be the suspend/resume
interface. Frames will be called for different reasons:
suspend  with a value  (generators)
return   with a value  (normal function calls)
transfer with a value  (coroutines)
transfer with no value (microthreads)

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From moshez at zadka.site.co.il  Mon Mar 19 17:00:01 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 19 Mar 2001 18:00:01 +0200
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5F43D.E33B188D@darwin.in-berlin.de>
References: <3AB5F43D.E33B188D@darwin.in-berlin.de>
Message-ID: <E14f24f-0004ny-00@darjeeling>

On Mon, 19 Mar 2001 12:57:49 +0100, Dinu Gherman <gherman at darwin.in-berlin.de> wrote:
> I wrote on comp.lang.python today:
> > 
> > is there a simple way (or any way at all) to find out for 
> > any given hard disk how much free space is left on that
> > device? I looked into the os module, but either not hard
> > enough or there is no such function. Of course, the ideal
> > solution would be platform-independent, too... :)
> 
> Is there any good reason for not having a cross-platform
> solution to this? I'm certainly not the first to ask for
> such a function and it certainly exists for all platforms,
> doesn't it?

No, it doesn't.
Specifically, the information is always unreliable, especially
when you start considering NFS mounted directories and things
like that.

> I know that OS differ in the services they provide, but in
> this case it seems to me that each one *must* have such a 
> function

This doesn't have a *meaning* in UNIX. (In the sense that I can
think of so many special cases that having a half-working implementation
is worse than nothing.)
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From gherman at darwin.in-berlin.de  Mon Mar 19 17:06:27 2001
From: gherman at darwin.in-berlin.de (Dinu Gherman)
Date: Mon, 19 Mar 2001 17:06:27 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>  
	            <3AB60A61.A4BB2768@darwin.in-berlin.de> <200103191458.JAA26035@cj20424-a.reston1.va.home.com>
Message-ID: <3AB62E83.ACBDEB3@darwin.in-berlin.de>

Guido van Rossum wrote:
> 
> Yeah, yeah, yeah.  Whine, whine, whine. [...]
> I'm not against adding it, but I'm not particularly motivated 
> to add it myself [...]

Good! After doing some quick research on Google, it turns out
this function is also available on MacOS, as expected, named 
PBHGetVInfo(). See this page for details plus a sample Pascal 
function using it:

  http://developer.apple.com/techpubs/mac/Files/Files-96.html

I'm not sure what else is needed to use it, but at least it's
there and maybe somebody more of a Mac expert than I am could
help out here... I'm going to continue this on c.l.p. in the
original thread... Hey, maybe it is already available in one
of the many mac packages. Well, I'll start some digging...

> I would hope that you agree with me though that the behavior of
> numbers is a lot more fundamental to education than finding out
> available disk space.  The latter is just a system call of use 
> to a small number of professionals.  The former has usability 
> implications for all Python users.

I do agree, sort of, but it appears that often there is much
more work being spent on fantastic new features, where
improving existing ones would also be very beneficial. For me
at least, there is considerable value in a system's consistency
and completeness and not only in its number of features.

Thanks everybody (now that Guido has spoken we have to finish)! 
It was fun! :)

Regards,

Dinu

-- 
Dinu C. Gherman
ReportLab Consultant - http://www.reportlab.com
................................................................
"The only possible values [for quality] are 'excellent' and 'in-
sanely excellent', depending on whether lives are at stake or 
not. Otherwise you don't enjoy your work, you don't work well, 
and the project goes down the drain." 
                    (Kent Beck, "Extreme Programming Explained")



From guido at digicool.com  Mon Mar 19 17:32:33 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 19 Mar 2001 11:32:33 -0500
Subject: [Python-Dev] Python T-shirts
Message-ID: <200103191632.LAA26632@cj20424-a.reston1.va.home.com>

At the conference we handed out T-shirts with the slogan on the back
"Python: programming the way Guido indented it".  We've been asked if
there are any left.  Well, we gave them all away, but we're ordering
more.  You can get them for $10 + S+H.  Write to Melissa Light
<melissa at digicool.com>.  Be nice to her!

--Guido van Rossum (home page: http://www.python.org/~guido/)




From nas at arctrix.com  Mon Mar 19 17:45:35 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 08:45:35 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <3AB62A91.1DBE7F8B@tismer.com>; from tismer@tismer.com on Mon, Mar 19, 2001 at 04:49:37PM +0100
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com>
Message-ID: <20010319084534.A18938@glacier.fnational.com>

On Mon, Mar 19, 2001 at 04:49:37PM +0100, Christian Tismer wrote:
> A possible bug may be in frame_resume, where you are doing
> +       f->f_back = tstate->frame;
> without taking care of the prior value of f_back.

Good catch.  There is also a bug when f_suspendvalue is being set
(Py_XDECREF should be called first).

[Christian on disallowing resume on frame already running]
> 1) introduce a lock flag for frames which are currently
>    executed by some interpreter on the C stack. This is
>    what Stackless does currently.
>    Maybe you can just use your new f_suspendvalue field.
>    frame_resume must check that this value is not NULL
>    on entry, and set it to zero before resuming.

Another good catch.  It would be easy to set f_stackbottom to
NULL at the top of PyEval_EvalFrame.  resume already checks this
to decide if the frame is resumable.

> 2) Do not expose the resume and suspend methods to the
>    Python user, and recode Generator.py as an extension
>    module in C. This should prevent abuse of frames.

I like the frame methods.  However, this may be a good idea since
Jython may implement things quite differently.

> Proposal for a different interface:
> I would change the interface of PyEval_EvalFrame
> to accept a return value passed in, like Stackless
> has its "passed_retval", and maybe another variable
> that explicitly tells the kind of the frame call,
> i.e. passing the desired why_code. This also would
> make it easier to cope with the other needs of Stackless
> later in a cleaner way.
> Well, I see you are clearing the f_suspendvalue later.
> Maybe just adding the why_code to the parameters
> would do. f_suspendvalue can be used for different
> things, it can also become the place to store a return
> value, or a coroutine transfer parameter.
> 
> In the future, there will not obly be the suspend/resume
> interface. Frames will be called for different reasons:
> suspend  with a value  (generators)
> return   with a value  (normal function calls)
> transfer with a value  (coroutines)
> transfer with no value (microthreads)

The interface needs some work and I'm happy to change it to
better accommodate stackless.  f_suspendvalue and f_stackbottom
are pretty ugly, IMO.  One unexpected benefit: with
PyEval_EvalFrame split out of eval_code2 the interpreter is 5%
faster on my machine.  I suspect the compiler has an easier time
optimizing the loop in the smaller function.

BTW, where is this stackless light patch I've been hearing about?
I would be interested to look at it.  Thanks for your comments.

  Neil



From tismer at tismer.com  Mon Mar 19 17:58:46 2001
From: tismer at tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 17:58:46 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com>
Message-ID: <3AB63AC6.4799C73@tismer.com>


Neil Schemenauer wrote:
...
> > 2) Do not expose the resume and suspend methods to the
> >    Python user, and recode Generator.py as an extension
> >    module in C. This should prevent abuse of frames.
> 
> I like the frame methods.  However, this may be a good idea since
> Jython may implement things quite differently.

Maybe a good reason. Exposing frame methods is nice
to play with. Finally, you will want the hard coded
generators. The same thing is happening with Stackless
now. I have a different spelling for frames :-) but
they have to vanish now.

[immature pre-pre-pre-interface]
> The interface needs some work and I'm happy to change it to
> better accommodate stackless.  f_suspendvalue and f_stackbottom
> are pretty ugly, IMO.  One unexpected benefit: with
> PyEval_EvalFrame split out of eval_code2 the interpreter is 5%
> faster on my machine.  I suspect the compiler has an easier time
> optimizing the loop in the smaller function.

Really!? I thought you had reported a speed loss?

> BTW, where is this stackless light patch I've been hearing about?
> I would be interested to look at it.  Thanks for your comments.

It does not exist at all. It is just an idea, and
we are looking for somebody who can implement it.
At the moment we have a PEP (thanks to Gordon), but
there is no specification of StackLite.
I believe PEPs are a good idea.
In this special case, I'd recommend trying to write
a StackLite first, and then write the PEP :-)

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From mal at lemburg.com  Mon Mar 19 17:07:10 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Mon, 19 Mar 2001 17:07:10 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF>
Message-ID: <3AB62EAE.FCFD7C9F@lemburg.com>

Fredrik Lundh wrote:
> 
> dinu wrote:
> > Well, this is the usual "If you need it, do it yourself!"
> > answer, that bites the one who dares to speak up for all
> > those hundreds who don't... isn't it?
> 
> fwiw, Python already supports this for real Unix platforms:
> 
> >>> os.statvfs("/")
> (8192, 512, 524288, 365788, 348336, 600556, 598516, 598516, 0, 255)
> 
> here, the root disk holds 524288x512 bytes, with 348336x512
> bytes free for the current user, and 365788x512 bytes available
> for root.
> 
> (the statvfs module contains indices for accessing this "struct")
> 
> Implementing a small subset of statvfs for Windows wouldn't
> be that hard (possibly returning None for fields that don't make
> sense, or are too hard to figure out).
> 
> (and with win32all, I'm sure it can be done without any C code).

It seems that all we need is Jack to port this to the Mac
and we have a working API here :-)

Let's do it...

import sys,os

try:
    os.statvfs

except AttributeError:
    # Win32 implementation...
    # Mac implementation...
    pass

else:
    import statvfs
    
    def freespace(path):
        """ freespace(path) -> integer
        Return the number of bytes available to the user on the file system
        pointed to by path."""
        s = os.statvfs(path)
        return s[statvfs.F_BAVAIL] * long(s[statvfs.F_BSIZE])

if __name__=='__main__':
    path = sys.argv[1]
    print 'Free space on %s: %i kB (%i bytes)' % (path,
                                                  freespace(path) / 1024,
                                                  freespace(path))
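The Win32 branch above could eventually be filled in along these
lines -- just a sketch, untested here, going through the real Win32
call GetDiskFreeSpaceExW via the ctypes extension (the function name
freespace_win32 is made up):

```python
import sys

def freespace_win32(path):
    """ freespace_win32(path) -> bytes free or None

    Bytes available to the calling user on the volume holding path.
    Returns None on non-Windows platforms.  Sketch only.
    """
    if not sys.platform.startswith('win'):
        return None
    import ctypes
    avail = ctypes.c_ulonglong(0)
    # GetDiskFreeSpaceExW(dir, &availToCaller, &total, &totalFree)
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceExW(
        ctypes.c_wchar_p(path), ctypes.byref(avail), None, None)
    if not ok:
        raise OSError('GetDiskFreeSpaceExW failed for %r' % path)
    return avail.value
```

With that in place, the except branch could define freespace() in
terms of it and we'd have both big platforms covered.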

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From pedroni at inf.ethz.ch  Mon Mar 19 18:08:41 2001
From: pedroni at inf.ethz.ch (Samuele Pedroni)
Date: Mon, 19 Mar 2001 18:08:41 +0100 (MET)
Subject: [Python-Dev] Simple generators, round 2
Message-ID: <200103191708.SAA09258@core.inf.ethz.ch>

Hi.

> > 2) Do not expose the resume and suspend methods to the
> >    Python user, and recode Generator.py as an extension
> >    module in C. This should prevent abuse of frames.
> 
> I like the frame methods.  However, this may be a good idea since
> Jython may implement things quite differently.

I should repeat this (if we want to avoid threads for implementing
generators, because for them that's really overkill, especially
if they are used in tight loops): the Jython codebase has the
following limitations:

- suspension points must be known at compilation time
  (we produce JVM bytecode, which would have to be instrumented
  to allow restarting at a given point). The only other solution
  is to compile a method with a big switch that has a case
  for every Python line, which is quite expensive.

- a suspension point can at most do a return; it cannot go up
  more than a single frame, even if it just wants to discard them.
  Maybe there is a workaround for this using exceptions, but they
  are expensive and again overkill for a tight loop.

=> we can support something like a suspend keyword. The rest is pain :-( .
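For comparison, the thread-per-generator approach being ruled out
as overkill looks roughly like this (a sketch only, all names made
up): every generator costs a whole OS thread plus two queue
handoffs per value produced.

```python
import threading
try:
    import queue            # Python 3
except ImportError:
    import Queue as queue   # Python 2

class ThreadGenerator:
    """Generator simulated with a producer thread and a 1-slot queue."""
    _DONE = object()

    def __init__(self, func):
        self._q = queue.Queue(1)
        t = threading.Thread(target=self._run, args=(func,))
        t.daemon = True
        t.start()

    def _run(self, func):
        func(self._q.put)        # func calls put() where it would suspend
        self._q.put(self._DONE)  # sentinel: producer finished

    def __iter__(self):
        return self

    def __next__(self):
        item = self._q.get()     # blocks until the producer emits
        if item is self._DONE:
            raise StopIteration
        return item
    next = __next__              # Python 2 spelling

def upto(n):
    def body(emit):
        for i in range(n):
            emit(i)
    return ThreadGenerator(body)
```

Functionally fine, but two full context switches per item is exactly
the cost a real suspension mechanism is supposed to avoid.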

regards.




From nas at arctrix.com  Mon Mar 19 18:21:59 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 09:21:59 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <3AB63AC6.4799C73@tismer.com>; from tismer@tismer.com on Mon, Mar 19, 2001 at 05:58:46PM +0100
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com>
Message-ID: <20010319092159.B19071@glacier.fnational.com>

[Neil]
> One unexpected benefit: with PyEval_EvalFrame split out of
> eval_code2 the interpreter is 5% faster on my machine.  I
> suspect the compiler has an easier time optimizing the loop in
> the smaller function.

[Christian]
> Really!? I thought you told about a speed loss?

You must be referring to an earlier post I made.  That was purely
speculation.  I didn't time things until the weekend.  Also, the
5% speedup is base on the refactoring of eval_code2 with the
added generator bits.  I wouldn't put much weight on the apparent
speedup either.  It's probably slower on other platforms.

  Neil



From tismer at tismer.com  Mon Mar 19 18:25:43 2001
From: tismer at tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 18:25:43 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com>
Message-ID: <3AB64117.8D3AEBED@tismer.com>


Neil Schemenauer wrote:
> 
> [Neil]
> > One unexpected benefit: with PyEval_EvalFrame split out of
> > eval_code2 the interpreter is 5% faster on my machine.  I
> > suspect the compiler has an easier time optimizing the loop in
> > the smaller function.
> 
> [Christian]
> > Really!? I thought you told about a speed loss?
> 
> You must be referring to an earlier post I made.  That was purely
> speculation.  I didn't time things until the weekend.  Also, the
> 5% speedup is base on the refactoring of eval_code2 with the
> added generator bits.  I wouldn't put much weight on the apparent
> speedup either.  Its probably slower on other platforms.

Nevermind. I believe this is going to be the best possible
efficient implementation of generators.
And I'm very confident that it will make it into the
core with ease and without the need for a PEP.

congrats - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From nas at arctrix.com  Mon Mar 19 18:27:33 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 09:27:33 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010319092159.B19071@glacier.fnational.com>; from nas@arctrix.com on Mon, Mar 19, 2001 at 09:21:59AM -0800
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com>
Message-ID: <20010319092733.C19071@glacier.fnational.com>

On Mon, Mar 19, 2001 at 09:21:59AM -0800, Neil Schemenauer wrote:
> Also, the 5% speedup is base on the refactoring of eval_code2
> with the added generator bits.

Ugh, that should say "based on the refactoring of eval_code2
WITHOUT the generator bits".

  engage-fingers-before-brain-ly y'rs Neil




From nas at arctrix.com  Mon Mar 19 18:38:44 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 19 Mar 2001 09:38:44 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <3AB64117.8D3AEBED@tismer.com>; from tismer@tismer.com on Mon, Mar 19, 2001 at 06:25:43PM +0100
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com> <3AB64117.8D3AEBED@tismer.com>
Message-ID: <20010319093844.D19071@glacier.fnational.com>

On Mon, Mar 19, 2001 at 06:25:43PM +0100, Christian Tismer wrote:
> I believe this is going to be the best possible efficient
> implementation of generators.  And I'm very confident that it
> will make it into the core with ease and without the need for a
> PEP.

I sure hope not.  We need to come up with better APIs and a
better interface from Python code.  The current interface is not
efficiently implementable in Jython, AFAIK.  We also need to
figure out how to make things play nicely with stackless.  IMHO,
a PEP is required.

My plan now is to look at how stackless works as I now understand
some of the issues.  Since no stackless light patch exists,
writing one may be a good learning project.  It's still a long
road to 2.2. :-)

  Neil



From tismer at tismer.com  Mon Mar 19 18:43:20 2001
From: tismer at tismer.com (Christian Tismer)
Date: Mon, 19 Mar 2001 18:43:20 +0100
Subject: [Python-Dev] Simple generators, round 2
References: <20010317181741.B12195@glacier.fnational.com> <3AB62A91.1DBE7F8B@tismer.com> <20010319084534.A18938@glacier.fnational.com> <3AB63AC6.4799C73@tismer.com> <20010319092159.B19071@glacier.fnational.com> <3AB64117.8D3AEBED@tismer.com> <20010319093844.D19071@glacier.fnational.com>
Message-ID: <3AB64538.15522433@tismer.com>


Neil Schemenauer wrote:
> 
> On Mon, Mar 19, 2001 at 06:25:43PM +0100, Christian Tismer wrote:
> > I believe this is going to be the best possible efficient
> > implementation of generators.  And I'm very confident that it
> > will make it into the core with ease and without the need for a
> > PEP.
> 
> I sure hope not.  We need to come up with better APIs and a
> better interface from Python code.  The current interface is not
> efficiently implementable in Jython, AFAIK.  We also need to
> figure out how to make things play nicely with stackless.  IMHO,
> a PEP is required.

Yes, sure. What I meant was not the current code, but the
simplistic, straightforward approach.

> My plan now is to look at how stackless works as I now understand
> some of the issues.  Since no stackless light patch exists,
> writing one may be a good learning project.  It's still a long
> road to 2.2. :-)

Warning, *unreadable* code. If you really want to read that,
make sure to use ceval_pre.c; it comes almost without optimization.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From paulp at ActiveState.com  Mon Mar 19 18:55:36 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 19 Mar 2001 09:55:36 -0800
Subject: [Python-Dev] nondist/sandbox/typecheck
Message-ID: <3AB64818.DA458342@ActiveState.com>

Could I check some type-checking code into nondist/sandbox? It's
quickly getting to the point where real users can start to see benefits
from it and I would like to let people play with it to convince
themselves of that.

Consider these mistaken statements:

os.path.abspath(None)
xmllib.XMLParser().feed(None)
sre.compile(".*", "I")

Here's what we used to get as tracebacks:

	os.path.abspath(None)
	(no error: any false-like value is treated the same as the empty
	string!)

	xmllib.XMLParser().feed(None)

Traceback (most recent call last):
  File "errors.py", line 8, in ?
    xmllib.XMLParser().feed(None)
  File "c:\python20\lib\xmllib.py", line 164, in feed
    self.rawdata = self.rawdata + data
TypeError: cannot add type "None" to string

	sre.compile(".*", "I")

Traceback (most recent call last):
  File "errors.py", line 12, in ?
    sre.compile(".*", "I")
  File "c:\python20\lib\sre.py", line 62, in compile
    return _compile(pattern, flags)
  File "c:\python20\lib\sre.py", line 100, in _compile
    p = sre_compile.compile(pattern, flags)
  File "c:\python20\lib\sre_compile.py", line 359, in compile
    p = sre_parse.parse(p, flags)
  File "c:\python20\lib\sre_parse.py", line 586, in parse
    p = _parse_sub(source, pattern, 0)
  File "c:\python20\lib\sre_parse.py", line 294, in _parse_sub
    items.append(_parse(source, state))
  File "c:\python20\lib\sre_parse.py", line 357, in _parse
    if state.flags & SRE_FLAG_VERBOSE:
TypeError: bad operand type(s) for &

====================

Here's what we get now:

	os.path.abspath(None)

Traceback (most recent call last):
  File "errors.py", line 4, in ?
    os.path.abspath(None)
  File "ntpath.py", line 401, in abspath
    def abspath(path):
InterfaceError: Parameter 'path' expected Unicode or 8-bit string.
Instead it got 'None' (None)

	xmllib.XMLParser().feed(None)

Traceback (most recent call last):
  File "errors.py", line 8, in ?
    xmllib.XMLParser().feed(None)
  File "xmllib.py", line 163, in feed
    def feed(self, data):
InterfaceError: Parameter 'data' expected Unicode or 8-bit string.
Instead it got 'None' (None)

	sre.compile(".*", "I")

Traceback (most recent call last):
  File "errors.py", line 12, in ?
    sre.compile(".*", "I")
  File "sre.py", line 61, in compile
    def compile(pattern, flags=0):
InterfaceError: Parameter 'flags' expected None.
Instead it got 'string' ('I')
None
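The general shape of such a checker can be sketched with a decorator (a hypothetical `expect` helper for illustration only; the sandbox module's actual API may differ):

```python
# Hypothetical sketch of runtime parameter checking, in the spirit of
# the sandbox code above; not its actual API.
def expect(**types):
    def deco(func):
        names = func.__code__.co_varnames[:func.__code__.co_argcount]
        def wrapper(*args, **kw):
            # Check positional and keyword arguments against the
            # declared types before the body ever runs.
            for name, val in list(zip(names, args)) + list(kw.items()):
                want = types.get(name)
                if want is not None and not isinstance(val, want):
                    raise TypeError("Parameter %r expected %s. "
                                    "Instead it got %r"
                                    % (name, want.__name__, val))
            return func(*args, **kw)
        return wrapper
    return deco

@expect(path=str)
def abspath(path):
    return path
```

With this, `abspath(None)` fails at the call boundary with a message naming the parameter, instead of deep inside the library.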

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From ping at lfw.org  Mon Mar 19 22:07:10 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Mon, 19 Mar 2001 13:07:10 -0800 (PST)
Subject: [Python-Dev] Nested scopes core dump
Message-ID: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>

I just tried this:

    Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
    [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> from __future__ import nested_scopes
    >>> def f(x):
    ...     x = x + 1
    ...     a = x + 3
    ...     b = x + 5
    ...     def g(y):
    ...         def h(z):
    ...             return a, b, x, y, z
    ...         return h
    ...     return g
    ...
    Fatal Python error: non-string found in code slot
    Aborted (core dumped)

gdb says v is NULL:

    #5  0x8059cce in PyCode_New (argcount=1, nlocals=2, stacksize=5, flags=3, code=0x8144688, consts=0x8145c1c, names=0x8122974, varnames=0x8145c6c, freevars=0x80ecc14, cellvars=0x81225d4, filename=0x812f900, name=0x810c288, firstlineno=5, lnotab=0x8144af0) at Python/compile.c:279
    279             intern_strings(freevars);
    (gdb) down
    #4  0x8059b80 in intern_strings (tuple=0x80ecc14) at Python/compile.c:233
    233                             Py_FatalError("non-string found in code slot");
    (gdb) list 230
    225     static int
    226     intern_strings(PyObject *tuple)
    227     {
    228             int i;
    229
    230             for (i = PyTuple_GET_SIZE(tuple); --i >= 0; ) {
    231                     PyObject *v = PyTuple_GET_ITEM(tuple, i);
    232                     if (v == NULL || !PyString_Check(v)) {
    233                             Py_FatalError("non-string found in code slot");
    234                             PyErr_BadInternalCall();
    (gdb) print v
    $1 = (PyObject *) 0x0

Hope this helps (this test should probably be added to test_scope.py too),


-- ?!ng

Happiness comes more from loving than being loved; and often when our
affection seems wounded it is only our vanity bleeding. To love, and
to be hurt often, and to love again--this is the brave and happy life.
    -- J. E. Buchrose 




From jeremy at alum.mit.edu  Mon Mar 19 22:09:30 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 19 Mar 2001 16:09:30 -0500 (EST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
Message-ID: <15030.30090.898715.282761@w221.z064000254.bwi-md.dsl.cnc.net>

Please submit bug reports as SF bug reports.  (Thanks for finding it,
but if I don't get to it today this email does me little good.)

Jeremy



From MarkH at ActiveState.com  Mon Mar 19 22:53:29 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 20 Mar 2001 08:53:29 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <09c001c0b06d$0f359eb0$8119fea9@neil>
Message-ID: <LCEPIIGDJPKCOIHOBJEPMEEJDGAA.MarkH@ActiveState.com>

Hi Neil!

>    The "program files" and "user" directory should still have names

"should" or "will"?

> representable in the normal locale used by the user so they are able to
> access them by using their standard encoding in a Python narrow character
> string to the open function.

I don't understand what "their standard encoding" is here.  My understanding
is that "their standard encoding" is whatever WideCharToMultiByte() returns,
and this is what mbcs is.

My understanding is that their "default encoding" will bear no relationship
to encoding names as known by Python.  I.e., given a user's locale, there is
no reasonable way to determine which of the Python encoding names will
always correctly work on these strings.

> > The way I see it, to fix this we have 2 basic choices when a Unicode
> > object is passed as a filename:
> > * we call the Unicode versions of the CRTL.
>
>    This is by far the better approach IMO as it is more general and will
> work for people who switch locales or who want to access files created by
> others using other locales. Although you can always use the horrid mangled
> "*~1" names.
>
> > * we auto-encode using the "mbcs" encoding, and still call the
> > non-Unicode versions of the CRTL.
>
>    This will improve things but to a lesser extent than the above. May be
> the best possible on 95.

I understand the above, but want to resist having different NT and 9x
versions of Python for obvious reasons.  I also wanted to avoid determining
at runtime if the platform has Unicode support and magically switching to
them.

I concur on the "may be the best possible on 95" and see no real downsides
on NT, other than the freak possibility of the default encoding being changed
_between_ us encoding a string and the OS decoding it.

Recall that my change is only to convert from Unicode to a string so the
file system can convert back to Unicode.  There is no real opportunity for
the current locale to change on this thread during this process.

I guess I see 3 options:

1) Do nothing, thereby forcing the user to manually encode the Unicode
object.  Only by encoding the string can they access these filenames, which
means the exact same issues apply.

2) Move to Unicode APIs where available, which will be a much deeper patch
and much harder to get right on non-Unicode Windows platforms.

3) Like 1, but simply automate the encoding task.

My proposal was to do (3).  It is not clear from your mail what you propose.
Like me, you seem to agree (2) would be perfect in an ideal world, but you
also agree we don't live in one.

What is your recommendation?

Mark.




From skip at pobox.com  Mon Mar 19 22:53:56 2001
From: skip at pobox.com (Skip Montanaro)
Date: Mon, 19 Mar 2001 15:53:56 -0600 (CST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <15030.30090.898715.282761@w221.z064000254.bwi-md.dsl.cnc.net>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
	<15030.30090.898715.282761@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <15030.32756.969347.565911@beluga.mojam.com>

    Jeremy> Please submit bug reports as SF bug reports.  (Thanks for
    Jeremy> finding it, but if I don't get to it today this email does me
    Jeremy> little good.)

What?  You actually delete email?  Or do you have an email system that works
like Usenet? 

;-)

S





From nhodgson at bigpond.net.au  Mon Mar 19 23:52:34 2001
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Tue, 20 Mar 2001 09:52:34 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPMEEJDGAA.MarkH@ActiveState.com>
Message-ID: <02e401c0b0c7$4a38a2a0$8119fea9@neil>

   Morning Mark,


> >    The "program files" and "user" directory should still have names
>
> "should" or "will"?

   Should. I originally wrote "will" but then thought of the scenario where
I install W2K with Russian as the default locale. The "Program Files"
directory (and other standard directories) is created with a localised name
(call it, "Russian PF" for now) including some characters not representable
in Latin 1. I then start working with a Python program and decide to change
the input locale to German. The "Russian PF" string is representable in
Unicode but not in the code page used for German so a WideCharToMultiByte
using the current code page will fail. Fail here means not that the function
will error but that a string will be constructed which will not round trip
back to Unicode and thus is unlikely to be usable to open the file.

> > representable in the normal locale used by the user so they are able to
> > access them by using their standard encoding in a Python narrow
> > character string to the open function.
>
> I don't understand what "their standard encoding" is here.  My
> understanding is that "their standard encoding" is whatever
> WideCharToMultiByte() returns, and this is what mbcs is.

    WideCharToMultiByte has an explicit code page parameter so it's the
caller that has to know what they want. The most common thing to do is ask
the system for the input locale and use this in the call to
WideCharToMultiByte and there are some CRT functions like wcstombs that wrap
this. Passing CP_THREAD_ACP to WideCharToMultiByte is another way. Scintilla
uses:

static int InputCodePage() {
 HKL inputLocale = ::GetKeyboardLayout(0);
 LANGID inputLang = LOWORD(inputLocale);
 char sCodePage[10];
 int res = ::GetLocaleInfo(MAKELCID(inputLang, SORT_DEFAULT),
   LOCALE_IDEFAULTANSICODEPAGE, sCodePage, sizeof(sCodePage));
 if (!res)
  return 0;
 return atoi(sCodePage);
}

   which is the result of reading various articles from MSDN and MSJ.
microsoft.public.win32.programmer.international is the news group for this
and Michael Kaplan answers a lot of these sorts of questions.
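At the Python level, the closest readily available notion is the locale's preferred encoding (an assumption for illustration: it reflects the process locale rather than the keyboard-layout query the Scintilla code above performs):

```python
import locale

# locale.getpreferredencoding() is roughly the Python-level analogue of
# asking the system for the user's ANSI code page: on Windows it names
# the active code page (e.g. 'cp1252'), elsewhere the locale's charmap.
encoding = locale.getpreferredencoding(False)
print(encoding)
```

The returned name is usable directly with `str.encode`/`bytes.decode`, which is what makes it a candidate bridge between the OS locale and Python's encoding names.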

> My understanding is that their "default encoding" will bear no
> relationship to encoding names as known by Python.  I.e., given a user's
> locale, there is no reasonable way to determine which of the Python
> encoding names will always correctly work on these strings.

   Uncertain. There should be a way to get the input locale as a Python
encoding name, or working on these sorts of issues will be difficult.

> Recall that my change is only to convert from Unicode to a string so the
> file system can convert back to Unicode.  There is no real opportunity for
> the current locale to change on this thread during this process.

   But the Unicode string may be non-representable using the current locale.
So doing the conversion makes the string unusable.

> My proposal was to do (3).  It is not clear from your mail what you
> propose.
> Like me, you seem to agree (2) would be perfect in an ideal world, but you
> also agree we don't live in one.

   I'd prefer (2). Support Unicode well on the platforms that support it
well. Providing some help on 95 is nice but not IMO as important.

   Neil





From mwh21 at cam.ac.uk  Tue Mar 20 00:14:08 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 19 Mar 2001 23:14:08 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Ka-Ping Yee's message of "Mon, 19 Mar 2001 13:07:10 -0800 (PST)"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
Message-ID: <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>

Ka-Ping Yee <ping at lfw.org> writes:

> I just tried this:
> 
>     Python 2.1b1 (#15, Mar 16 2001, 04:31:43) 
>     [GCC egcs-2.91.66 19990314/Linux (egcs-1.1.2 release)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> from __future__ import nested_scopes
>     >>> def f(x):
>     ...     x = x + 1
>     ...     a = x + 3
>     ...     b = x + 5
>     ...     def g(y):
>     ...         def h(z):
>     ...             return a, b, x, y, z
>     ...         return h
>     ...     return g
>     ...
>     Fatal Python error: non-string found in code slot
>     Aborted (core dumped)

Here, look at this:

static int
symtable_freevar_offsets(PyObject *freevars, int offset)
{
      PyObject *name, *v;
      int pos;

      /* The cell vars are the first elements of the closure,
         followed by the free vars.  Update the offsets in
         c_freevars to account for number of cellvars. */  
      pos = 0;
      while (PyDict_Next(freevars, &pos, &name, &v)) {
              int i = PyInt_AS_LONG(v) + offset;
              PyObject *o = PyInt_FromLong(i);
              if (o == NULL)
                      return -1;
              if (PyDict_SetItem(freevars, name, o) < 0) {
                      Py_DECREF(o);
                      return -1;
              }
              Py_DECREF(o);
      }
      return 0;
}

this modifies the dictionary you're iterating over.  This is, as they
say, a Bad Idea[*].

https://sourceforge.net/tracker/index.php?func=detail&aid=409864&group_id=5470&atid=305470

is a minimal-effort/impact fix.  I don't know the new compile.c well
enough to really judge the best fix.

Cheers,
M.

[*] I thought that if you used the same keys when you were iterating
    over a dict you were safe.  It seems not, at least as far as I
    could tell with mounds of debugging printf's.
-- 
  (Of course SML does have its weaknesses, but by comparison, a
  discussion of C++'s strengths and flaws always sounds like an
  argument about whether one should face north or east when one
  is sacrificing one's goat to the rain god.)         -- Thant Tessman




From jeremy at alum.mit.edu  Tue Mar 20 00:17:30 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 19 Mar 2001 18:17:30 -0500 (EST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
	<m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "MWH" == Michael Hudson <mwh21 at cam.ac.uk> writes:

  MWH> [*] I thought that if you used the same keys when you were
  MWH> iterating over a dict you were safe.  It seems not, at least as
  MWH> far as I could tell with mounds of debugging printf's.

I did, too.  Anyone know what the problem is?

Jeremy



From martin at loewis.home.cs.tu-berlin.de  Tue Mar 20 00:16:34 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 20 Mar 2001 00:16:34 +0100
Subject: [Python-Dev] Unicode and the Windows file system.
Message-ID: <200103192316.f2JNGYK02041@mira.informatik.hu-berlin.de>

> The way I see it, to fix this we have 2 basic choices when a Unicode object
> is passed as a filename:
>
> * we call the Unicode versions of the CRTL.

That is the choice that I prefer. I understand that it won't work on
Win95, but I think that needs to be worked around.

By using "Unicode versions" of an API, you are making the code
Windows-specific anyway. So I wonder whether it might be better to use
the plain API instead of the CRTL; I also wonder how difficult it
actually is to do "the right thing all the time".

On NT, the file system is defined in terms of Unicode, so passing
Unicode in and out is definitely the right thing (*). On Win9x, the
file system uses some platform specific encoding, which means that
using that encoding is the right thing. On Unix, there is no
established convention, but UTF-8 was invented exactly to deal with
Unicode in Unix file systems, so that might be appropriate choice
(**).

So I'm in favour of supporting Unicode on all file system APIs; that
does include os.listdir(). For 2.1, that may be a bit much given that
a beta release has already been seen; so only accepting Unicode on
input is what we can do now.

Regards,
Martin

(*) Converting to the current MBCS might be lossy, and it might not
support all file names. The "ASCII only" approach of 2.0 was precisely
taken to allow getting it right later; I strongly discourage any
approach that attempts to drop the restriction in a way that does not
allow to get it right later.

(**) At least, that is the best bet. Many Unix installations use some
other encoding in their file names; if Unicode becomes more common,
most likely installations will also use UTF-8 on their file systems.
Unless it can be established what the file system encoding is,
returning Unicode from os.listdir is probably not the right thing.
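For reference, this is roughly how later Pythons settled the question (an assumption about modern behaviour, long after 2.1, offered only as a postscript): the type of the names returned follows the type of the argument.

```python
import os

# Modern os.listdir mirrors the argument type: pass a str path and you
# get str names back, decoded with the filesystem encoding; pass bytes
# and you get the raw, undecoded bytes names.
text_names = os.listdir(".")
byte_names = os.listdir(b".")
assert all(isinstance(n, str) for n in text_names)
assert all(isinstance(n, bytes) for n in byte_names)
```

The bytes form exists precisely for the case Martin flags: names on disk that cannot be decoded with the assumed filesystem encoding.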



From mwh21 at cam.ac.uk  Tue Mar 20 00:44:11 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 19 Mar 2001 23:44:11 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Jeremy Hylton's message of "Mon, 19 Mar 2001 18:17:30 -0500 (EST)"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk>

Jeremy Hylton <jeremy at alum.mit.edu> writes:

> >>>>> "MWH" == Michael Hudson <mwh21 at cam.ac.uk> writes:
> 
>   MWH> [*] I thought that if you used the same keys when you were
>   MWH> iterating over a dict you were safe.  It seems not, at least as
>   MWH> far as I could tell with mounds of debugging printf's.
> 
> I did, too.  Anyone know what the problems is?  

The dict's resizing, it turns out.

I note that in PyDict_SetItem, the check to see if the dict needs
resizing occurs *before* it is known whether the key is already in the
dict.  But if this is the problem, how come we haven't been bitten by
this before?
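The safe pattern can be sketched at the Python level (an illustration of the idea, not the actual CPython patch): compute the new values first, so no set-item happens while the iteration is in flight.

```python
# Sketch: instead of writing back into the dict while a
# PyDict_Next-style iteration is in progress (which can trigger a
# resize mid-walk), build the updated mapping first, then apply it.
def add_offset(freevars, offset):
    updates = {name: index + offset for name, index in freevars.items()}
    freevars.update(updates)   # keys unchanged, values rebound
    return freevars
```

For example, `add_offset({"a": 0, "b": 1}, 2)` leaves the dict as `{"a": 2, "b": 3}` without ever mutating it during the loop.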

Cheers,
M.

-- 
  While preceding your entrance with a grenade is a good tactic in
  Quake, it can lead to problems if attempted at work.    -- C Hacking
               -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html




From jeremy at alum.mit.edu  Tue Mar 20 00:48:42 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 19 Mar 2001 18:48:42 -0500 (EST)
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk>
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org>
	<m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk>
	<15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net>
	<m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "MH" == Michael Hudson <mwh21 at cam.ac.uk> writes:

  MH> Jeremy Hylton <jeremy at alum.mit.edu> writes:
  >> >>>>> "MWH" == Michael Hudson <mwh21 at cam.ac.uk> writes:
  >>
  MWH> [*] I thought that if you used the same keys when you were
  MWH> iterating over a dict you were safe.  It seems not, at least as
  MWH> far as I could tell with mounds of debugging printf's.
  >>
  >> I did, too.  Anyone know what the problems is?

  MH> The dict's resizing, it turns out.

So a hack to make the iteration safe would be to assign an element
and then delete it?

  MH> I note that in PyDict_SetItem, the check to see if the dict
  MH> needs resizing occurs *before* it is known whether the key is
  MH> already in the dict.  But if this is the problem, how come we
  MH> haven't been bitten by this before?

It's probably unusual for a dictionary to be in this state when the
compiler decides to update the values.

Jeremy



From MarkH at ActiveState.com  Tue Mar 20 00:57:21 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Tue, 20 Mar 2001 10:57:21 +1100
Subject: [Python-Dev] RE: Unicode and the Windows file system.
In-Reply-To: <200103192316.f2JNGYK02041@mira.informatik.hu-berlin.de>
Message-ID: <LCEPIIGDJPKCOIHOBJEPKEEODGAA.MarkH@ActiveState.com>

OK - it appears everyone agrees we should go the "Unicode API" route.  I
actually thought my scheme did not preclude moving to this later.

This is a much bigger can of worms than I have bandwidth to take on at the
moment.  As Martin mentions, what will os.listdir() return on Win9x vs
Win2k?  What does passing a Unicode object to a non-Unicode Win32 platform
mean? etc.  How do Win95/98/ME differ in their Unicode support?  Do the
various service packs for each of these change the basic support?

So unfortunately this simply means the status quo remains until someone
_does_ have the time and inclination.  That may well be me in the future,
but is not now.  It also means that until then, Python programmers will
struggle with this and determine that they can make it work simply by
encoding the Unicode as an "mbcs" string.  Or worse, they will note that
"latin1 seems to work" and use that even though it will work "less often"
than mbcs.  I was simply hoping to automate that encoding using a scheme
that works "most often".

The biggest drawback is that by doing nothing we are _encouraging_ the user
to write broken code.  The way things stand at the moment, the users will
_never_ pass Unicode objects to these APIs (as they don't work) and will
therefore manually encode a string.  To my mind this is _worse_ than what my
scheme proposes - at least my scheme allows Unicode objects to be passed to
the Python functions - python may choose to change the way it handles these
in the future.  But by forcing the user to encode a string we have lost
_all_ meaningful information about the Unicode object and can only hope they
got the encoding right.
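The manual encoding step Mark describes looks roughly like this (the non-Windows fallback is an assumption added only to keep the sketch runnable anywhere):

```python
import sys

def encode_filename(name):
    # On Windows, the "mbcs" codec encodes via WideCharToMultiByte with
    # the active code page -- the step users were doing by hand.  On
    # other platforms we fall back to the filesystem encoding so this
    # sketch stays portable.
    encoding = "mbcs" if sys.platform == "win32" else sys.getfilesystemencoding()
    return name.encode(encoding)

print(encode_filename("abc"))
```

Once encoded this way, the information about which characters the Unicode string really contained is gone, which is exactly the loss Mark is objecting to.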

If anyone else decides to take this on, please let me know.  However, I fear
that in a couple of years we may still be waiting and in the meantime people
will be coding hacks that will _not_ work in the new scheme.

c'est-la-vie-ly,

Mark.




From mwh21 at cam.ac.uk  Tue Mar 20 01:02:59 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 00:02:59 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Jeremy Hylton's message of "Mon, 19 Mar 2001 18:48:42 -0500 (EST)"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk>

Jeremy Hylton <jeremy at alum.mit.edu> writes:

> >>>>> "MH" == Michael Hudson <mwh21 at cam.ac.uk> writes:
> 
>   MH> Jeremy Hylton <jeremy at alum.mit.edu> writes:
>   >> >>>>> "MWH" == Michael Hudson <mwh21 at cam.ac.uk> writes:
>   >>
>   MWH> [*] I thought that if you used the same keys when you were
>   MWH> iterating over a dict you were safe.  It seems not, at least as
>   MWH> far as I could tell with mounds of debugging printf's.
>   >>
>   >> I did, too.  Anyone know what the problems is?
> 
>   MH> The dict's resizing, it turns out.
> 
> So a hack to make the iteration safe would be to assign and element
> and then delete it?

Yes.  This would be gross beyond belief though.  Particularly as the
normal case is for freevars to be empty.

>   MH> I note that in PyDict_SetItem, the check to see if the dict
>   MH> needs resizing occurs *before* it is known whether the key is
>   MH> already in the dict.  But if this is the problem, how come we
>   MH> haven't been bitten by this before?
> 
> It's probably unusual for a dictionary to be in this state when the
> compiler decides to update the values.

What I meant was that there are bits and pieces of code in the Python
core that blithely pass keys gotten from PyDict_Next into
PyDict_SetItem.  From what I've just learnt, I'd expect this to
occasionally cause glitches of extreme confusing-ness.  Though on
investigation, I don't think any of these bits of code are sensitive
to getting keys out multiple times (which is what happens in this case
- though you must be able to miss keys too).  Might cause the odd leak
here and there.

Cheers,
M.

-- 
  Clue: You've got the appropriate amount of hostility for the
  Monastery, however you are metaphorically getting out of the
  safari jeep and kicking the lions.                         -- coonec
               -- http://home.xnet.com/~raven/Sysadmin/ASR.Quotes.html




From greg at cosc.canterbury.ac.nz  Tue Mar 20 01:19:35 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 20 Mar 2001 12:19:35 +1200 (NZST)
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <3AB5FCE5.92A133AB@lemburg.com>
Message-ID: <200103200019.MAA06253@s454.cosc.canterbury.ac.nz>

"M.-A. Lemburg" <mal at lemburg.com>:

> Actually opening a file in record mode and then using
> file.seek() should work on many platforms.

Not on Unix! No space is actually allocated until you
write something, regardless of where you seek to. And
then only the blocks that you touch (files can have
holes in them).
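The behaviour is easy to demonstrate (assuming a Unix filesystem that supports sparse files):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.seek(10_000_000)   # seek far past the end of the empty file...
    f.write(b"x")        # ...and touch a single byte

size = os.path.getsize(path)        # logical size: 10_000_001 bytes
blocks = os.stat(path).st_blocks    # 512-byte blocks actually allocated
os.remove(path)

# On filesystems with holes, blocks * 512 is typically far smaller than
# the logical size: the seek allocated nothing.
print(size, blocks * 512)
```

So seeking tells you nothing about whether the space will actually be available when you write.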

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Tue Mar 20 01:21:47 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 20 Mar 2001 12:21:47 +1200 (NZST)
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
In-Reply-To: <3AB60DAB.D92D12BF@tismer.com>
Message-ID: <200103200021.MAA06256@s454.cosc.canterbury.ac.nz>

Christian Tismer <tismer at tismer.com>:

> It does not
> matter how and where frames were created, it is just impossible
> to jump to a frame that is held by an interpreter on the C stack.

I think I need a clearer idea of what it means for a frame
to be "held by an interpreter".

I gather that each frame has a lock flag. How and when does
this flag get set and cleared?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim.one at home.com  Tue Mar 20 02:48:27 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 19 Mar 2001 20:48:27 -0500
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <20010319141834.X27808@xs4all.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEMHJGAA.tim.one@home.com>

Here's a radical suggestion:  Start a x-platform project on SourceForge,
devoted to producing a C library with a common interface for
platform-dependent crud like "how big is this file?" and "how many bytes free
on this disk?" and "how can I execute a shell command in a portable way?"
(e.g., Tcl's "exec" emulates a subset of Bourne shell syntax, including
redirection and pipes, even on Windows 3.1).

OK, that's too useful.  Nevermind ...
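For what it's worth, a couple of the items on that wish list eventually grew portable stdlib spellings (a note about much later Pythons, nothing available in 2.1):

```python
import shutil
import subprocess
import sys

# "how many bytes free on this disk?" -- portable since Python 3.3
usage = shutil.disk_usage(".")   # named tuple: total, used, free (bytes)

# "execute a shell command in a portable way" -- subprocess handles the
# platform differences that Tcl's exec papered over
out = subprocess.run([sys.executable, "-c", "print(42)"],
                     capture_output=True, text=True).stdout
print(usage.free, out.strip())
```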




From tismer at tismer.com  Tue Mar 20 06:15:01 2001
From: tismer at tismer.com (Christian Tismer)
Date: Tue, 20 Mar 2001 06:15:01 +0100
Subject: [Python-Dev] Re: [Stackless] comments on PEP 219
References: <200103200021.MAA06256@s454.cosc.canterbury.ac.nz>
Message-ID: <3AB6E755.B39C2E62@tismer.com>


Greg Ewing wrote:
> 
> Christian Tismer <tismer at tismer.com>:
> 
> > It does not
> > matter how and where frames were created, it is just impossible
> > to jump to a frame that is held by an interpreter on the C stack.
> 
> I think I need a clearer idea of what it means for a frame
> to be "held by an interpreter".
> 
> I gather that each frame has a lock flag. How and when does
> this flag get set and cleared?

Assume a frame F being executed by an interpreter A.
Now, if this frame calls a function, which in turn
starts another interpreter B, this hides interpreter
A on the C stack. Frame F cannot be run by anything
until interpreter B is finished.
Exactly in this situation, frame F has its lock set,
to prevent crashes.
Such a locked frame cannot be a switch target.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From barry at digicool.com  Tue Mar 20 06:12:17 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Tue, 20 Mar 2001 00:12:17 -0500
Subject: [Python-Dev] Re: What has become of PEP224 ?
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>
Message-ID: <15030.59057.866982.538935@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at python.org> writes:

    GvR> So I see little chance for PEP 224.  Maybe I should just
    GvR> pronounce on this, and declare the PEP rejected.

So, was that a BDFL pronouncement or not? :)

-Barry



From tim_one at email.msn.com  Tue Mar 20 06:57:23 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 20 Mar 2001 00:57:23 -0500
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <200103191312.IAA25747@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGENHJGAA.tim_one@email.msn.com>

[Mark Hammond]
> * os.listdir() returns '\xe0test\xf2' for this file.

[Guido]
> I don't understand.  This is a Latin-1 string.  Can you explain again
> how the MBCS encoding encodes characters outside the Latin-1 range?

I expect this is a coincidence.  MBCS is a generic term for a large number of
distinct variable-length encoding schemes, one or more specific to each
language.  Latin-1 is a subset of some MBCS schemes, but not of others; Mark
was using a German MBCS locale, right?  Across MS's set of MBCS schemes, there's
little consistency:  a one-byte encoding in one of them may well be a "lead
byte" (== the first byte of a two-byte encoding) in another.
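A concrete illustration of the lead-byte point, using code pages shipped with modern Python's codec set:

```python
# The same byte can be a complete character in one Windows code page
# and a lead byte in another.  0x82 is U+201A (a low quotation mark)
# in cp1252, but in cp932 (Shift-JIS) it introduces a two-byte
# sequence: 0x82 0xA0 together decode to hiragana A.
print(b"\x82".decode("cp1252"))      # single-byte character
print(b"\x82\xa0".decode("cp932"))   # one two-byte character
```

Decoding bytes with the wrong code page therefore doesn't just mangle individual characters; it can split or merge them.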

All this stuff is hidden under layers of macros so general that, if you code
it right, you can switch between compiling MBCS code on Win95 and Unicode
code on NT via setting one compiler #define.  Or that's what they advertise.
The multi-lingual Windows app developers at my previous employer were all
bald despite being no older than 23 <wink>.

ascii-boy-ly y'rs  - tim




From tim_one at email.msn.com  Tue Mar 20 07:31:49 2001
From: tim_one at email.msn.com (Tim Peters)
Date: Tue, 20 Mar 2001 01:31:49 -0500
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010319084534.A18938@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCENJJGAA.tim_one@email.msn.com>

[Neil Schemenauer]
> I like the frame methods.  However, this may be a good idea since
> Jython may implement things quite differently.

Note that the "compare fringes of two trees" example is a classic not because
it's inherently interesting, but because it distills the essence of a
particular *class* of problem (that's why it's popular with academics).

In Icon you need to create co-expressions to solve this problem, because its
generators aren't explicitly resumable, and Icon has no way to spell "kick a
pair of generators in lockstep".  But explicitly resumable generators are in
fact "good enough" for this classic example, which is usually used to
motivate coroutines.

I expect this relates to the XLST/XSLT/whatever-the-heck-it-was example:  if
Paul thought iterators were the bee's knees there, I *bet* in glorious
ignorance that iterators implemented via Icon-style generators would be the
bee's pajamas.

Of course Christian is right that you have to prevent a suspended frame from
getting activated more than once simultaneously; but that's detectable, and
should be considered a programmer error if it happens.




From fredrik at pythonware.com  Tue Mar 20 08:00:51 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 08:00:51 +0100
Subject: [Python-Dev] RE: Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEEODGAA.MarkH@ActiveState.com>
Message-ID: <003a01c0b10b$80e6a650$e46940d5@hagrid>

Mark Hammond wrote:
> OK - it appears everyone agrees we should go the "Unicode API" route.

well, I'd rather play with a minimal (mbcs) patch now, than wait another
year or so for a full unicodification, so if you have the time...

Cheers /F




From tim.one at home.com  Tue Mar 20 08:08:53 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 02:08:53 -0500
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: <200103190709.AAA10053@localhost.localdomain>
Message-ID: <LNBBLJKPBEHFEDALKOLCMENKJGAA.tim.one@home.com>

[Uche Ogbuji]
> Quite interesting.  I brought up this *exact* point at the
> Stackless BOF at IPC9.  I mentioned that the immediate reason
> I was interested in Stackless was to supercharge the efficiency
> of 4XSLT.  I think that a stackless 4XSLT could pretty much
> annihilate the other processors in the field for performance.

Hmm.  I'm interested in clarifying the cost/performance boundaries of the
various approaches.  I don't understand XSLT (I don't even know what it is).
Do you grok the difference between full-blown Stackless and Icon-style
generators?  The correspondent I quoted believed the latter were on-target
for XSLT work, and given the way Python works today generators are easier to
implement than full-blown Stackless.  But while I can speak with some
confidence about the latter, I don't know whether they're sufficient for what
you have in mind.

If this is some flavor of one-at-a-time tree-traversal algorithm, generators
should suffice.

class TreeNode:
    # with self.value
    #      self.children, a list of TreeNode objects
    ...
    def generate_kids(self):  # pre-order traversal
        suspend self.value
        for kid in self.children:
            for itskids in kid.generate_kids():
                suspend itskids

for k in someTreeNodeObject.generate_kids():
    print k

So the control-flow is thoroughly natural, but you can only suspend to your
immediate invoker (in recursive traversals, this "walks up the chain" of
generators for each result).  With explicitly resumable generator objects,
multiple trees (or even general graphs -- doesn't much matter) can be
traversed in lockstep (or any other interleaving that's desired).
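Tim's `suspend` sketch maps almost directly onto the `yield` syntax Python later grew; here is a hedged modern rendering (class and function names invented for illustration), including the lockstep comparison of two trees that Icon needs co-expressions for:

```python
from itertools import zip_longest

class TreeNode:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

    def generate_kids(self):
        # pre-order traversal, spelled with the yield keyword
        yield self.value
        for kid in self.children:
            for itskids in kid.generate_kids():
                yield itskids

def lockstep_equal(t1, t2):
    # Two explicitly resumable generator objects, kicked in lockstep --
    # the pattern the "compare fringes of two trees" classic distills.
    _missing = object()  # sentinel so unequal lengths compare unequal
    return all(a == b for a, b in
               zip_longest(t1.generate_kids(), t2.generate_kids(),
                           fillvalue=_missing))
```

Each call to `generate_kids()` produces an independent generator object, so any interleaving of the two traversals is possible, not just suspending to the immediate invoker.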

Now decide <wink>.





From fredrik at pythonware.com  Tue Mar 20 08:36:59 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 08:36:59 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
References: <LNBBLJKPBEHFEDALKOLCMEMHJGAA.tim.one@home.com>
Message-ID: <017a01c0b110$8d132890$e46940d5@hagrid>

tim wrote:
> Here's a radical suggestion:  Start a x-platform project on SourceForge,
> devoted to producing a C library with a common interface for
> platform-dependent crud like "how big is this file?" and "how many bytes free
> on this disk?" and "how can I execute a shell command in a portable way?"
> (e.g., Tcl's "exec" emulates a subset of Bourne shell syntax, including
> redirection and pipes, even on Windows 3.1).

counter-suggestion:

add partial os.statvfs emulation to the posix module for Windows
(and Mac), and write helpers for shutil to do the fancy stuff you
mentioned before.

Cheers /F




From tim.one at home.com  Tue Mar 20 09:30:18 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 03:30:18 -0500
Subject: [Python-Dev] Function in os module for available disk space, why not?
In-Reply-To: <017a01c0b110$8d132890$e46940d5@hagrid>
Message-ID: <LNBBLJKPBEHFEDALKOLCKENPJGAA.tim.one@home.com>

[Fredrik Lundh]
> counter-suggestion:
>
> add partial os.statvfs emulation to the posix module for Windows
> (and Mac), and write helpers for shutil to do the fancy stuff you
> mentioned before.

One of the best things Python ever did was to introduce os.path.getsize() +
friends, saving the bulk of the world from needing to wrestle with the
obscure Unix stat() API.  os.chmod() is another x-platform teachability pain;
if there's anything worth knowing in the bowels of statvfs(), let's please
spell it in a human-friendly way from the start.




From fredrik at effbot.org  Tue Mar 20 09:58:53 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Tue, 20 Mar 2001 09:58:53 +0100
Subject: [Python-Dev] Function in os module for available disk space, why not?
References: <LNBBLJKPBEHFEDALKOLCKENPJGAA.tim.one@home.com>
Message-ID: <01ec01c0b11b$ff9593c0$e46940d5@hagrid>

Tim Peters wrote:
> One of the best things Python ever did was to introduce os.path.getsize() +
> friends, saving the bulk of the world from needing to wrestle with the
> obscure Unix stat() API.

yup (I remember lobbying for those years ago), but that doesn't
mean that we cannot make already existing low-level APIs work
on as many platforms as possible...

(just like os.popen etc)

adding os.statvfs for windows is pretty much a bug fix (for 2.1?),
but adding a new API is not (2.2).

> os.chmod() is another x-platform teachability pain

shutil.chmod("file", "g+x"), anyone?

> if there's anything worth knowing in the bowels of statvfs(), let's
> please spell it in a human-friendly way from the start.

how about os.path.getfreespace("path") and
os.path.gettotalspace("path") ?

Cheers /F




From fredrik at pythonware.com  Tue Mar 20 13:07:23 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 13:07:23 +0100
Subject: [Python-Dev] sys.prefix woes
Message-ID: <04e601c0b136$52ee8e90$0900a8c0@SPIFF>

(windows, 2.0)

it looks like sys.prefix isn't set unless 1) PYTHONHOME is set, or
2) lib/os.py can be found somewhere between the directory your
executable is found in, and the root.

if neither is set, the path is taken from the registry, but sys.prefix
is left blank, and FixTk.py no longer works.

any ideas?  is this a bug?  is there an "official" workaround that
doesn't involve using the time machine to upgrade all BeOpen
and ActiveState kits?

Cheers /F




From guido at digicool.com  Tue Mar 20 13:48:09 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 07:48:09 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Your message of "20 Mar 2001 00:02:59 GMT."
             <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> 
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net>  
            <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <200103201248.HAA29485@cj20424-a.reston1.va.home.com>

> >   MH> The dict's resizing, it turns out.
> > 
> >   So a hack to make the iteration safe would be to assign an element
> > and then delete it?
> 
> Yes.  This would be gross beyond belief though.  Particularly as the
> normal case is for freevars to be empty.
> 
> >   MH> I note that in PyDict_SetItem, the check to see if the dict
> >   MH> needs resizing occurs *before* it is known whether the key is
> >   MH> already in the dict.  But if this is the problem, how come we
> >   MH> haven't been bitten by this before?
> > 
> > It's probably unusual for a dictionary to be in this state when the
> > compiler decides to update the values.
> 
> What I meant was that there are bits and pieces of code in the Python
> core that blithely pass keys gotten from PyDict_Next into
> PyDict_SetItem.

Where?

> From what I've just learnt, I'd expect this to
> occasionally cause glitches of extreme confusing-ness.  Though on
> investigation, I don't think any of these bits of code are sensitive
> to getting keys out multiple times (which is what happens in this case
> - though you must be able to miss keys too).  Might cause the odd leak
> here and there.

I'd fix the dict implementation, except that that's tricky.

Checking for a dup key in PyDict_SetItem() before calling dictresize()
slows things down.  Checking in insertdict() is wrong because
dictresize() uses that!

Jeremy, is there a way that you could fix your code to work around
this?  Let's talk about this when you get into the office.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Tue Mar 20 14:03:42 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 08:03:42 -0500
Subject: [Python-Dev] Re: What has become of PEP224 ?
In-Reply-To: Your message of "Tue, 20 Mar 2001 00:12:17 EST."
             <15030.59057.866982.538935@anthem.wooz.org> 
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com>  
            <15030.59057.866982.538935@anthem.wooz.org> 
Message-ID: <200103201303.IAA29601@cj20424-a.reston1.va.home.com>

> >>>>> "GvR" == Guido van Rossum <guido at python.org> writes:
> 
>     GvR> So I see little chance for PEP 224.  Maybe I should just
>     GvR> pronounce on this, and declare the PEP rejected.
> 
> So, was that a BDFL pronouncement or not? :)
> 
> -Barry

Yes it was.  I really don't like the syntax, the binding between the
docstring and the documented identifier is too weak.  It's best to do
this explicitly, e.g.

    a = 12*12
    __doc_a__ = """gross"""

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mwh21 at cam.ac.uk  Tue Mar 20 14:30:10 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 13:30:10 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Guido van Rossum's message of "Tue, 20 Mar 2001 07:48:09 -0500"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com>
Message-ID: <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido at digicool.com> writes:

> > >   MH> The dict's resizing, it turns out.
> > > 
> > > So a hack to make the iteration safe would be to assign an element
> > > and then delete it?
> > 
> > Yes.  This would be gross beyond belief though.  Particularly as the
> > normal case is for freevars to be empty.
> > 
> > >   MH> I note that in PyDict_SetItem, the check to see if the dict
> > >   MH> needs resizing occurs *before* it is known whether the key is
> > >   MH> already in the dict.  But if this is the problem, how come we
> > >   MH> haven't been bitten by this before?
> > > 
> > > It's probably unusual for a dictionary to be in this state when the
> > > compiler decides to update the values.
> > 
> > What I meant was that there are bits and pieces of code in the Python
> > core that blithely pass keys gotten from PyDict_Next into
> > PyDict_SetItem.
> 
> Where?

import.c:PyImport_Cleanup
moduleobject.c:_PyModule_Clear

Hrm, I was sure there were more than that, but there don't seem to be.
Sorry for the alarmism.

> > From what I've just learnt, I'd expect this to
> > occasionally cause glitches of extreme confusing-ness.  Though on
> > investigation, I don't think any of these bits of code are sensitive
> > to getting keys out multiple times (which is what happens in this case
> > - though you must be able to miss keys too).  Might cause the odd leak
> > here and there.
> 
> I'd fix the dict implementation, except that that's tricky.

I'd got that far...

> Checking for a dup key in PyDict_SetItem() before calling dictresize()
> slows things down.  Checking in insertdict() is wrong because
> dictresize() uses that!

Maybe you could do the check for resize *after* the call to
insertdict?  I think that would work, but I wouldn't like to go
messing with such a performance critical bit of code without some
careful thinking.

Cheers,
M.

-- 
  You sound surprised.  We're talking about a government department
  here - they have procedures, not intelligence.
                                            -- Ben Hutchings, cam.misc




From mwh21 at cam.ac.uk  Tue Mar 20 14:44:50 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 13:44:50 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Michael Hudson's message of "20 Mar 2001 13:30:10 +0000"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com> <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <m3ae6gh7vx.fsf@atrus.jesus.cam.ac.uk>

Michael Hudson <mwh21 at cam.ac.uk> writes:

> Guido van Rossum <guido at digicool.com> writes:
> 
> > Checking for a dup key in PyDict_SetItem() before calling dictresize()
> > slows things down.  Checking in insertdict() is wrong because
> > dictresize() uses that!
> 
> Maybe you could do the check for resize *after* the call to
> insertdict?  I think that would work, but I wouldn't like to go
> messing with such a performance critical bit of code without some
> careful thinking.

Indeed; this tiny little patch:

Index: Objects/dictobject.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Objects/dictobject.c,v
retrieving revision 2.73
diff -c -r2.73 dictobject.c
*** Objects/dictobject.c	2001/01/18 00:39:02	2.73
--- Objects/dictobject.c	2001/03/20 13:38:04
***************
*** 496,501 ****
--- 496,508 ----
  	Py_INCREF(value);
  	Py_INCREF(key);
  	insertdict(mp, key, hash, value);
+ 	/* if fill >= 2/3 size, double in size */
+ 	if (mp->ma_fill*3 >= mp->ma_size*2) {
+ 		if (dictresize(mp, mp->ma_used*2) != 0) {
+ 			if (mp->ma_fill+1 > mp->ma_size)
+ 				return -1;
+ 		}
+ 	}
  	return 0;
  }
  
fixes Ping's reported crash.  You can't naively (as I did at first)
*only* check after the insertdict, 'cause dicts are created with 0
size.

Currently building from scratch to do some performance testing.

Cheers,
M.

-- 
  It's a measure of how much I love Python that I moved to VA, where
  if things don't work out Guido will buy a plantation and put us to
  work harvesting peanuts instead.     -- Tim Peters, comp.lang.python




From fredrik at pythonware.com  Tue Mar 20 14:58:29 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 14:58:29 +0100
Subject: [Python-Dev] sys.prefix woes
References: <04e601c0b136$52ee8e90$0900a8c0@SPIFF>
Message-ID: <054e01c0b145$d9d727f0$0900a8c0@SPIFF>

I wrote:
> any ideas?  is this a bug?  is there an "official" workaround that
> doesn't involve using the time machine to upgrade all BeOpen
> and ActiveState kits?

I found a workaround (a place to put some app-specific python code
that runs before anyone actually attempts to use sys.prefix)

still looks like a bug, though.  I'll post it to sourceforge.

Cheers /F




From guido at digicool.com  Tue Mar 20 15:32:00 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 09:32:00 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Your message of "20 Mar 2001 13:30:10 GMT."
             <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk> 
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com>  
            <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <200103201432.JAA00360@cj20424-a.reston1.va.home.com>

> > Checking for a dup key in PyDict_SetItem() before calling dictresize()
> > slows things down.  Checking in insertdict() is wrong because
> > dictresize() uses that!
> 
> Maybe you could do the check for resize *after* the call to
> insertdict?  I think that would work, but I wouldn't like to go
> messing with such a performance critical bit of code without some
> careful thinking.

No, that could still decide to resize, couldn't it?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Tue Mar 20 15:33:20 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 09:33:20 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Your message of "20 Mar 2001 13:30:10 GMT."
             <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk> 
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com>  
            <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <200103201433.JAA00373@cj20424-a.reston1.va.home.com>

Ah, the solution is simple.  Check for identical keys only when about
to resize:

	/* if fill >= 2/3 size, double in size */
	if (mp->ma_fill*3 >= mp->ma_size*2) {
		***** test here *****
		if (dictresize(mp, mp->ma_used*2) != 0) {
			if (mp->ma_fill+1 > mp->ma_size)
				return -1;
		}
	}

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mwh21 at cam.ac.uk  Tue Mar 20 16:13:35 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 20 Mar 2001 15:13:35 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: Guido van Rossum's message of "Tue, 20 Mar 2001 09:33:20 -0500"
References: <Pine.LNX.4.10.10103191257320.4368-100000@skuld.kingmanhall.org> <m3pufdgxmn.fsf@atrus.jesus.cam.ac.uk> <15030.37770.770573.891669@w221.z064000254.bwi-md.dsl.cnc.net> <m3n1ahgw8k.fsf@atrus.jesus.cam.ac.uk> <15030.39642.784846.545571@w221.z064000254.bwi-md.dsl.cnc.net> <m3k85lgvd8.fsf@atrus.jesus.cam.ac.uk> <200103201248.HAA29485@cj20424-a.reston1.va.home.com> <m3d7bch8kd.fsf@atrus.jesus.cam.ac.uk> <200103201433.JAA00373@cj20424-a.reston1.va.home.com>
Message-ID: <m34rwoh3s0.fsf@atrus.jesus.cam.ac.uk>

Does anyone know how to reply to two messages gracefully in gnus?

Guido van Rossum <guido at digicool.com> writes:

> > Maybe you could do the check for resize *after* the call to
> > insertdict?  I think that would work, but I wouldn't like to go
> > messing with such a performance critical bit of code without some
> > careful thinking.
>
> No, that could still decide to resize, couldn't it?

Yes, but not when you're inserting on a key that is already in the
dictionary - because the resize would have happened when the key was
inserted into the dictionary, and thus the problem we're seeing here
wouldn't happen.

What's happening in Ping's test case is that the dict is in some sense
being prepped to resize when an item is added but not actually
resizing until PyDict_SetItem is called again, which is unfortunately
inside a PyDict_Next loop.

Guido van Rossum <guido at digicool.com> writes:

> Ah, the solution is simple.  Check for identical keys only when about
> to resize:
> 
> 	/* if fill >= 2/3 size, double in size */
> 	if (mp->ma_fill*3 >= mp->ma_size*2) {
> 		***** test here *****
> 		if (dictresize(mp, mp->ma_used*2) != 0) {
> 			if (mp->ma_fill+1 > mp->ma_size)
> 				return -1;
> 		}
> 	}

This might also do nasty things to performance - this code path gets
travelled fairly often for small dicts.

Does anybody know the average (mean/mode/median) size for dicts in
a "typical" python program?

  -------

Using mal's pybench with and without the patch I posted shows a 0.30%
slowdown, including these interesting lines:

                  DictCreation:    1662.80 ms   11.09 us  +34.23%
        SimpleDictManipulation:     764.50 ms    2.55 us  -15.67%

DictCreation repeatedly creates dicts of size 0 and 3.
SimpleDictManipulation repeatedly adds six elements to a dict and then
deletes them again.

Dicts of size 3 are likely to be the worst case wrt. my patch; without
it, they will have a ma_fill of 3 and a ma_size of 4 (but calling
PyDict_SetItem again will trigger a resize - this is what happens in
Ping's example), but with my patch they will always have an ma_fill of
3 and an ma_size of 8.  That's why DictCreation is so much worse,
and why I asked the question about average dict sizes.

Mind you, 6 is a similar edge case, so I don't know why
SimpleDictManipulation does better.  Maybe something to do with
collisions or memory behaviour.
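The 2/3 fill rule quoted above can be checked with a few lines; the function here just evaluates the inequality from the C comment (`fill*3 >= size*2`), not CPython's actual code, and shows why three-element dicts are the edge case:

```python
def fill_that_triggers_resize(size):
    # Smallest fill count satisfying fill*3 >= size*2,
    # per the "if fill >= 2/3 size, double in size" comment.
    fill = 0
    while fill * 3 < size * 2:
        fill += 1
    return fill

# A size-4 table hits the threshold at its 3rd occupied slot, so a
# 3-element dict sits exactly on the resize boundary; a size-8 table
# does not resize until the 6th slot fills.
```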

Cheers,
M.

-- 
  I don't remember any dirty green trousers.
                                             -- Ian Jackson, ucam.chat




From skip at pobox.com  Tue Mar 20 16:19:54 2001
From: skip at pobox.com (Skip Montanaro)
Date: Tue, 20 Mar 2001 09:19:54 -0600 (CST)
Subject: [Python-Dev] zipfile.py - detect if zipinfo is a dir  (fwd)
Message-ID: <15031.29978.95112.488244@beluga.mojam.com>

Not sure why I received this note.  I am passing it along to Jim Ahlstrom
and python-dev.

Skip

-------------- next part --------------
An embedded message was scrubbed...
From: Stephane Matamontero <dev1.gemodek at t-online.de>
Subject: zipfile.py - detect if zipinfo is a dir 
Date: Tue, 20 Mar 2001 06:39:27 -0800
Size: 2485
URL: <http://mail.python.org/pipermail/python-dev/attachments/20010320/5070f250/attachment-0001.eml>

From tim.one at home.com  Tue Mar 20 17:01:21 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 11:01:21 -0500
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: <m34rwoh3s0.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEONJGAA.tim.one@home.com>

[Michael Hudson]
>>> Maybe you could do the check for resize *after* the call to
>>> insertdict?  I think that would work, but I wouldn't like to go
>>> messing with such a performance critical bit of code without some
>>> careful thinking.

[Guido]
>> No, that could still decide to resize, couldn't it?

[Michael]
> Yes, but not when you're inserting on a key that is already in the
> dictionary - because the resize would have happened when the key was
> inserted into the dictionary, and thus the problem we're seeing here
> wouldn't happen.

Careful:  this comment is only half the truth:

	/* if fill >= 2/3 size, double in size */

The dictresize following is also how dicts *shrink*.  That is, build up a
dict, delete a whole bunch of keys, and nothing at all happens to the size
until you call setitem again (actually, I think you need to call it more than
once -- the behavior is tricky).  In any case, that a key is already in the
dict does not guarantee that a dict won't resize (via shrinking) when doing a
setitem.

We could bite the bullet and add a new PyDict_AdjustSize function, just
duplicating the resize logic.  Then loops that know they won't be changing
the size can call that before starting.  Delicate, though.
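For what it's worth, the hazard under discussion is visible from pure Python whenever a loop inserts new keys while iterating; modern CPython detects the size change and raises RuntimeError instead of silently corrupting the iteration, and the usual workaround is to snapshot the keys first. A small sketch (illustrative only, not the C-level fix being debated):

```python
def update_values_unsafely(d):
    # Inserting new keys while iterating the live dict: CPython raises
    # RuntimeError ("dictionary changed size during iteration").
    try:
        for k in d:
            d[k + "_copy"] = d[k]
    except RuntimeError:
        return False
    return True

def update_values_safely(d):
    # Snapshot the keys first, so a resize cannot disturb the loop.
    for k in list(d.keys()):
        d[k + "_copy"] = d[k]
```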




From jim at interet.com  Tue Mar 20 18:42:11 2001
From: jim at interet.com (James C. Ahlstrom)
Date: Tue, 20 Mar 2001 12:42:11 -0500
Subject: [Python-Dev] Re: zipfile.py - detect if zipinfo is a dir  (fwd)
References: <15031.29978.95112.488244@beluga.mojam.com>
Message-ID: <3AB79673.C29C0BBE@interet.com>

Skip Montanaro wrote:
> 
> Not sure why I received this note.  I am passing it along to Jim Ahlstrom
> and python-dev.

Thanks.  I will look into it.

JimA



From fredrik at pythonware.com  Tue Mar 20 20:20:38 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Tue, 20 Mar 2001 20:20:38 +0100
Subject: [Python-Dev] Function in os module for available disk space, why  not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF> <3AB62EAE.FCFD7C9F@lemburg.com>
Message-ID: <048401c0b172$dd6892a0$e46940d5@hagrid>

mal wrote:

>         return s[statvfs.F_BAVAIL] * long(s[statvfs.F_BSIZE])

F_FRAVAIL, not F_BAVAIL

(and my plan is to make a statvfs subset available on
all platforms, which makes your code even simpler...)

Cheers /F




From jack at oratrix.nl  Tue Mar 20 21:34:51 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 20 Mar 2001 21:34:51 +0100
Subject: [Python-Dev] Test for case-sensitive imports?
Message-ID: <20010320203457.3A72EEA11D@oratrix.oratrix.nl>

Hmm, apparently the flurry of changes to the case-checking code in
import has broken the case-checks for the macintosh. I'll fix that,
but maybe we should add a testcase for case-sensitive import?

And a related point: the logic for determining whether to use a
mac-specific, windows-specific or unix-specific routine in the getpass 
module is error prone.

Why these two points are related is left as an exercise to the reader:-)
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From jack at oratrix.nl  Tue Mar 20 21:47:37 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Tue, 20 Mar 2001 21:47:37 +0100
Subject: [Python-Dev] test_coercion failing
Message-ID: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>

Test_coercion fails on the Mac (current CVS sources) with
We expected (repr): '(1+0j)'
But instead we got: '(1-0j)'
test test_coercion failed -- Writing: '(1-0j)', expected: '(1+0j)'

The computation it was doing was "2 / (2+0j) =".

To my mathematical eye it shouldn't be complaining in the first place, 
but I assume this may be either a missing round() somewhere or a
symptom of a genuine bug.

Can anyone point me in the right direction?
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From guido at digicool.com  Tue Mar 20 22:00:26 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 16:00:26 -0500
Subject: [Python-Dev] Test for case-sensitive imports?
In-Reply-To: Your message of "Tue, 20 Mar 2001 21:34:51 +0100."
             <20010320203457.3A72EEA11D@oratrix.oratrix.nl> 
References: <20010320203457.3A72EEA11D@oratrix.oratrix.nl> 
Message-ID: <200103202100.QAA01606@cj20424-a.reston1.va.home.com>

> Hmm, apparently the flurry of changes to the case-checking code in
> import has broken the case-checks for the macintosh. I'll fix that,
> but maybe we should add a testcase for case-sensitive import?

Thanks -- yes, please add a testcase!  ("import String" should do it,
right? :-)

> And a related point: the logic for determining whether to use a
> mac-specific, windows-specific or unix-specific routine in the getpass 
> module is error prone.

Can you fix that too?

> Why these two points are related is left as an exercise to the reader:-)

:-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mal at lemburg.com  Tue Mar 20 22:03:40 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Tue, 20 Mar 2001 22:03:40 +0100
Subject: [Python-Dev] Function in os module for available disk space, why 
 not?
References: <LNBBLJKPBEHFEDALKOLCKENPJGAA.tim.one@home.com> <01ec01c0b11b$ff9593c0$e46940d5@hagrid>
Message-ID: <3AB7C5AC.DE61F186@lemburg.com>

Fredrik Lundh wrote:
> 
> Tim Peters wrote:
> > One of the best things Python ever did was to introduce os.path.getsize() +
> > friends, saving the bulk of the world from needing to wrestle with the
> > obscure Unix stat() API.
> 
> yup (I remember lobbying for those years ago), but that doesn't
> mean that we cannot make already existing low-level APIs work
> on as many platforms as possible...
> 
> (just like os.popen etc)
> 
> adding os.statvfs for windows is pretty much a bug fix (for 2.1?),
> but adding a new API is not (2.2).
> 
> > os.chmod() is another x-platform teachability pain
> 
> shutil.chmod("file", "g+x"), anyone?

Wasn't shutil declared obsolete ?
 
> > if there's anything worth knowing in the bowels of statvfs(), let's
> > please spell it in a human-friendly way from the start.
> 
> how about os.path.getfreespace("path") and
> os.path.gettotalspace("path") ?

Anybody care to add the missing parts in:

import sys,os

try:
    os.statvfs

except AttributeError:
    # Win32 implementation...
    # Mac implementation...
    pass

else:
    import statvfs

    def freespace(path):
        """ freespace(path) -> integer
        Return the number of bytes available to the user on the file system
        pointed to by path."""
        s = os.statvfs(path)
        return s[statvfs.F_BAVAIL] * long(s[statvfs.F_BSIZE])

if __name__=='__main__':
    path = sys.argv[1]
    print 'Free space on %s: %i kB (%i bytes)' % (path,
                                                  freespace(path) / 1024,
                                                  freespace(path))


totalspace() should be just as easy to add and I'm pretty
sure that you can get that information on *all* platforms
(not necessarily using the same APIs though).
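As a rough sketch of that pair, here is what freespace() and totalspace() look like using the attribute-style statvfs result that later versions of os.statvfs() return (the field names f_bavail, f_blocks and f_frsize are POSIX statvfs fields; f_frsize, not f_bsize, is the unit the block counts are measured in). POSIX-only, so the Win32/Mac branches above would still be needed:

```python
import os

def freespace(path):
    """Bytes available to a non-root user on path's file system."""
    s = os.statvfs(path)
    return s.f_bavail * s.f_frsize

def totalspace(path):
    """Total size in bytes of the file system containing path."""
    s = os.statvfs(path)
    return s.f_blocks * s.f_frsize
```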

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at digicool.com  Tue Mar 20 22:16:32 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 20 Mar 2001 16:16:32 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: Your message of "Tue, 20 Mar 2001 21:47:37 +0100."
             <20010320204742.BC08AEA11D@oratrix.oratrix.nl> 
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl> 
Message-ID: <200103202116.QAA01770@cj20424-a.reston1.va.home.com>

> Test_coercion fails on the Mac (current CVS sources) with
> We expected (repr): '(1+0j)'
> But instead we got: '(1-0j)'
> test test_coercion failed -- Writing: '(1-0j)', expected: '(1+0j)'
> 
> The computation it was doing was "2 / (2+0j) =".
> 
> To my mathematical eye it shouldn't be complaining in the first place, 
> but I assume this may be either a missing round() somewhere or a
> symptom of a genuine bug.
> 
> Can anyone point me in the right direction?

Tim admits that he changed complex division and repr().  So that's
where you might want to look.  If you wait a bit, Tim will check his
algorithm to see if a "minus zero" can pop out of it.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From aahz at rahul.net  Tue Mar 20 22:38:27 2001
From: aahz at rahul.net (Aahz Maruch)
Date: Tue, 20 Mar 2001 13:38:27 -0800 (PST)
Subject: [Python-Dev] Function in os module for available disk space, why
In-Reply-To: <3AB7C5AC.DE61F186@lemburg.com> from "M.-A. Lemburg" at Mar 20, 2001 10:03:40 PM
Message-ID: <20010320213828.2D30F99C80@waltz.rahul.net>

M.-A. Lemburg wrote:
> 
> Wasn't shutil declared obsolete ?

<blink>  What?!
-- 
                      --- Aahz (@pobox.com)

Hugs and backrubs -- I break Rule 6             http://www.rahul.net/aahz
Androgynous poly kinky vanilla queer het

I don't really mind a person having the last whine, but I do mind
someone else having the last self-righteous whine.



From paul at pfdubois.com  Wed Mar 21 00:56:06 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Tue, 20 Mar 2001 15:56:06 -0800
Subject: [Python-Dev] PEP 242 Released
Message-ID: <ADEOIFHFONCLEEPKCACCGEANCHAA.paul@pfdubois.com>

PEP: 242
Title: Numeric Kinds
Version: $Revision: 1.1 $
Author: paul at pfdubois.com (Paul F. Dubois)
Status: Draft
Type: Standards Track
Created: 17-Mar-2001
Python-Version: 2.2
Post-History:


Abstract

    This proposal gives the user optional control over the precision
    and range of numeric computations so that a computation can be
    written once and run anywhere with at least the desired precision
    and range.  It is backward compatible with existing code.  The
    meaning of decimal literals is clarified.


Rationale

    Currently it is impossible in every language except Fortran 90 to
    write a program in a portable way that uses floating point and
    gets roughly the same answer regardless of platform -- or refuses
    to compile if that is not possible.  Python currently has only one
    floating point type, equal to a C double in the C implementation.

    No type exists corresponding to single or quad floats.  It would
    complicate the language to try to introduce such types directly
    and their subsequent use would not be portable.  This proposal is
    similar to the Fortran 90 "kind" solution, adapted to the Python
    environment.  With this facility an entire calculation can be
    switched from one level of precision to another by changing a
    single line.  If the desired precision does not exist on a
    particular machine, the program will fail rather than get the
    wrong answer.  Since coding in this style would involve an early
    call to the routine that will fail, this is the next best thing to
    not compiling.


Supported Kinds

    Each Python compiler may define as many "kinds" of integer and
    floating point numbers as it likes, except that it must support at
    least two kinds of integer corresponding to the existing int and
    long, and must support at least one kind of floating point number,
    equivalent to the present float.  The range and precision of
    these kinds are processor dependent, as at present, except for the
    "long integer" kind, which can hold an arbitrary integer.  The
    built-in functions int(), float(), long() and complex() convert
    inputs to these default kinds as they do at present.  (Note that a
    Unicode string is actually a different "kind" of string and that a
    sufficiently knowledgeable person might be able to expand this PEP
    to cover that case.)

    Within each type (integer, floating, and complex) the compiler
    supports a linearly-ordered set of kinds, with the ordering
    determined by the ability to hold numbers of an increased range
    and/or precision.


Kind Objects

    Three new standard functions are defined in a module named
    "kinds".  They return callable objects called kind objects.  Each
    int or floating kind object f has the signature result = f(x), and
    each complex kind object has the signature result = f(x, y=0.).

    int_kind(n)
        For n >= 1, return a callable object whose result is an
        integer kind that will hold an integer number in the open
        interval (-10**n,10**n).  This function always succeeds, since
        it can return the 'long' kind if it has to. The kind object
        accepts arguments that are integers including longs.  If n ==
        0, returns the kind object corresponding to long.

    float_kind(nd, n)
        For nd >= 0 and n >= 1, return a callable object whose result
        is a floating point kind that will hold a floating-point
        number with at least nd digits of precision and a base-10
        exponent in the open interval (-n, n).  The kind object
        accepts arguments that are integer or real.

    complex_kind(nd, n)
        Return a callable object whose result is a complex kind that
        will hold a complex number each of whose components
        (.real, .imag) is of kind float_kind(nd, n).  The kind object
        will accept one argument that is integer, real, or complex, or
        two arguments, each integer or real.

    The compiler will return a kind object corresponding to the least
    of its available set of kinds for that type that has the desired
    properties.  If no kind with the desired qualities exists in a
    given implementation an OverflowError exception is thrown.  A kind
    function converts its argument to the target kind, but if the
    result does not fit in the target kind's range, an OverflowError
    exception is thrown.

    Kind objects also accept a string argument for conversion of
    literal notation to their kind.

    Besides their callable behavior, kind objects have attributes
    giving the traits of the kind in question.  The list of traits
    needs to be completed.
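The kinds module was never implemented, but the rules above pin down its
behavior closely enough to sketch.  The following is purely illustrative:
int_kind follows the PEP's specification, while the error message and inner
names are invented for the example:

```python
def int_kind(n):
    """Return a converter onto an integer kind holding the open
    interval (-10**n, 10**n); n == 0 means the unbounded 'long' kind."""
    if n == 0:
        return int  # modern Python ints are unbounded, like 'long' was
    limit = 10 ** n
    def kind(x):
        x = int(x)  # accepts ints, longs, and literal strings
        if not -limit < x < limit:
            raise OverflowError("%r does not fit in int_kind(%d)" % (x, n))
        return x
    return kind

tinyint = int_kind(1)
assert tinyint(3) == 3
assert tinyint("-9") == -9   # kind objects also accept literal strings
```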


The Meaning of Literal Values

    Literal integer values without a trailing L are of the least
    integer kind required to represent them.  An integer literal with
    a trailing L is a long.  Literal decimal values are of the
    greatest available binary floating-point kind.


Concerning Infinite Floating Precision

    This section makes no proposals and can be omitted from
    consideration.  It is for illuminating an intentionally
    unimplemented 'corner' of the design.

    This PEP does not propose the creation of an infinite precision
    floating point type, just leaves room for it.  Just as int_kind(0)
    returns the long kind object, if in the future an infinitely
    precise decimal kind is available, float_kind(0,0) could return a
    function that converts to that type.  Since such a kind function
    accepts string arguments, programs could then be written that are
    completely precise.  Perhaps in analogy to r'a raw string', 1.3r
    might be available as syntactic sugar for calling the infinite
    floating kind object with argument '1.3'.  r could be thought of
    as meaning 'rational'.


Complex numbers and kinds

    Complex numbers are always pairs of floating-point numbers with
    the same kind.  A Python compiler must support a complex analog of
    each floating point kind it supports, if it supports complex
    numbers at all.


Coercion

    In an expression, coercion between different kinds is to the
    greater kind.  For this purpose, all complex kinds are "greater
    than" all floating-point kinds, and all floating-point kinds are
    "greater than" all integer kinds.


Examples

    In module myprecision.py:

        import kinds
        tinyint = kinds.int_kind(1)
        single = kinds.float_kind(6, 90)
        double = kinds.float_kind(15, 300)
        csingle = kinds.complex_kind(6, 90)

    In the rest of my code:

        from myprecision import tinyint, single, double, csingle
        n = tinyint(3)
        x = double(1.e20)
        z = 1.2
        # builtin float gets you the default float kind, properties unknown
        w = x * float(x)
        w = x * double(z)
        u = csingle(x + z * 1.0j)
        u2 = csingle(x+z, 1.0)

    Note how that entire code can then be changed to a higher
    precision by changing the arguments in myprecision.py.

    Comment: note that you aren't promised that single != double; but
    you are promised that double(1.e20) will hold a number with 15
    decimal digits of precision and a range up to 10**300 or that the
    float_kind call will fail.


Open Issues

    The assertion that a decimal literal means a binary floating-point
    value of the largest available kind is in conflict with other
    proposals about Python's numeric model.  This PEP asserts that
    these other proposals are wrong and that part of them should not
    be implemented.

    Determine the exact list of traits for integer and floating point
    numbers.  There are some standard Fortran routines that do this
    but I have to track them down.  Also there should be information
    sufficient to create a Numeric array of an equal or greater kind.


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:





From tim.one at home.com  Wed Mar 21 04:33:15 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 22:33:15 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>

Everyone!  Run this program under current CVS:

x = 0.0
print "%.17g" % -x
print "%+.17g" % -x

What do you get?  WinTel prints "0" for the first and "+0" for the second.

C89 doesn't define the results.

C99 requires "-0" for both (on boxes with signed floating zeroes, which is
virtually all boxes today due to IEEE 754).

I don't want to argue the C rules, I just want to know whether this *does*
vary across current platforms.




From tim.one at home.com  Wed Mar 21 04:46:04 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 22:46:04 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <200103202116.QAA01770@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBDJHAA.tim.one@home.com>

[Guido]
> ...
> If you wait a bit, Tim will check his algorithm to see if
> a "minus zero" can pop out of it.

I'm afraid Jack will have to work harder than that.  He should have gotten a
minus 0 out of this one if and only if he got a minus 0 before, and under 754
rules he *will* get a minus 0 if and only if he told his 754 hardware to use
its "to minus infinity" rounding mode.

Is test_coercion failing on any platform other than Macintosh?




From tim.one at home.com  Wed Mar 21 05:01:13 2001
From: tim.one at home.com (Tim Peters)
Date: Tue, 20 Mar 2001 23:01:13 -0500
Subject: [Python-Dev] Test for case-sensitive imports?
In-Reply-To: <200103202100.QAA01606@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEBEJHAA.tim.one@home.com>

[ Guido van Rossum]
> Hmm, apparently the flurry of changes to the case-checking code in
> import has broken the case-checks for the macintosh.

Hmm.  This should have been broken way back in 2.1a1, as the code you later
repaired was introduced by the first release of Mac OS X changes.  Try to
stay more current in the future <wink>.

> I'll fix that, but maybe we should add a testcase for
> case-sensitive import?

Yup!  Done now.




From uche.ogbuji at fourthought.com  Wed Mar 21 05:23:01 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Tue, 20 Mar 2001 21:23:01 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: Message from "Tim Peters" <tim.one@home.com> 
   of "Tue, 20 Mar 2001 02:08:53 EST." <LNBBLJKPBEHFEDALKOLCMENKJGAA.tim.one@home.com> 
Message-ID: <200103210423.VAA20300@localhost.localdomain>

> [Uche Ogbuji]
> > Quite interesting.  I brought up this *exact* point at the
> > Stackless BOF at IPC9.  I mentioned that the immediate reason
> > I was interested in Stackless was to supercharge the efficiency
> > of 4XSLT.  I think that a stackless 4XSLT could pretty much
> > annihilate the other processors in the field for performance.
> 
> Hmm.  I'm interested in clarifying the cost/performance boundaries of the
> various approaches.  I don't understand XSLT (I don't even know what it is).
> Do you grok the difference between full-blown Stackless and Icon-style
> generators?

To a decent extent, based on reading your posts carefully.

> The correspondent I quoted believed the latter were on-target
> for XSLT work, and given the way Python works today generators are easier to
> implement than full-blown Stackless.  But while I can speak with some
> confidence about the latter, I don't know whether they're sufficient for what
> you have in mind.

Based on a discussion with Christian at IPC9, they are.  I should have been 
more clear about that.  My main need is to be able to change a bit of context 
and invoke a different execution path, without going through the full overhead 
of a function call.  XSLT, if written "naturally", tends to involve huge 
numbers of such tweak-context-and-branch operations.

> If this is some flavor of one-at-time tree-traversal algorithm, generators
> should suffice.
> 
> class TreeNode:
>     # with self.value
>     #      self.children, a list of TreeNode objects
>     ...
>     def generate_kids(self):  # pre-order traversal
>         suspend self.value
>         for kid in self.children:
>             for itskids in kid.generate_kids():
>                 suspend itskids
> 
> for k in someTreeNodeObject.generate_kids():
>     print k
> 
> So the control-flow is thoroughly natural, but you can only suspend to your
> immediate invoker (in recursive traversals, this "walks up the chain" of
> generators for each result).  With explicitly resumable generator objects,
> multiple trees (or even general graphs -- doesn't much matter) can be
> traversed in lockstep (or any other interleaving that's desired).
> 
> Now decide <wink>.

Suspending only to the invoker should do the trick because it is typically a 
single XSLT instruction that governs multiple tree-operations with varied 
context.

At IPC9, Guido put up a poll of likely use of stackless features, and it was a 
pretty clear arithmetic progression from those who wanted to use microthreads, 
to those who wanted co-routines, to those who wanted just generators.  The 
generator folks were probably 2/3 of the assembly.  Looks as if many have 
decided, and they seem to agree with you.


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From greg at cosc.canterbury.ac.nz  Wed Mar 21 05:49:33 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Wed, 21 Mar 2001 16:49:33 +1200 (NZST)
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103210423.VAA20300@localhost.localdomain>
Message-ID: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>

>     def generate_kids(self):  # pre-order traversal
>         suspend self.value
>         for kid in self.children:
>             for itskids in kid.generate_kids():
>                 suspend itskids

Can I make a suggestion: If we're going to get this generator
stuff, I think it would read better if the suspending statement
were

   yield x

rather than

   suspend x

because x is not the thing that we are suspending!
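Greg's suggested spelling is, of course, the one Python eventually adopted
(generators with "yield" arrived in Python 2.2, PEP 255).  The quoted
TreeNode example runs today almost verbatim; the TreeNode constructor and
sample tree below are filled in for illustration:

```python
class TreeNode:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

    def generate_kids(self):  # pre-order traversal
        yield self.value
        for kid in self.children:
            for v in kid.generate_kids():  # each result walks up the chain
                yield v

tree = TreeNode(1, [TreeNode(2), TreeNode(3, [TreeNode(4)])])
assert list(tree.generate_kids()) == [1, 2, 3, 4]
```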

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From fdrake at acm.org  Wed Mar 21 05:58:10 2001
From: fdrake at acm.org (Fred L. Drake)
Date: Tue, 20 Mar 2001 23:58:10 -0500
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>
Message-ID: <web-1702694@digicool.com>

Greg Ewing <greg at cosc.canterbury.ac.nz> wrote:
 > stuff, I think it would read better if the suspending
 > statement were
 > 
 >    yield x
 > 
 > rather than
 > 
 >    suspend x

  I agree; this really improves readability.  I'm sure
someone knows of a precedent for the "suspend" keyword, but
the only one I recall seeing before is "yield" (Sather).


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations



From nas at arctrix.com  Wed Mar 21 06:04:42 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Tue, 20 Mar 2001 21:04:42 -0800
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>; from tim.one@home.com on Tue, Mar 20, 2001 at 10:33:15PM -0500
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl> <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <20010320210442.A22819@glacier.fnational.com>

On Tue, Mar 20, 2001 at 10:33:15PM -0500, Tim Peters wrote:
> Everyone!  Run this program under current CVS:

There are probably lots of Linux testers around but here's what I
get:

    Python 2.1b2 (#2, Mar 20 2001, 23:52:29) 
    [GCC 2.95.3 20010219 (prerelease)] on linux2
    Type "copyright", "credits" or "license" for more information.
    >>> x = 0.0
    >>> print "%.17g" % -x
    -0
    >>> print "%+.17g" % -x
    -0

libc is GNU 2.2.2  (if that matters).  test_coercion works for me
too.  Is test_coercion testing too much accidental implementation
behavior?

  Neil



From ping at lfw.org  Wed Mar 21 07:14:57 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Tue, 20 Mar 2001 22:14:57 -0800 (PST)
Subject: [Python-Dev] Re: Generator syntax
In-Reply-To: <web-1702694@digicool.com>
Message-ID: <Pine.LNX.4.10.10103202213070.4368-100000@skuld.kingmanhall.org>

Greg Ewing <greg at cosc.canterbury.ac.nz> wrote:
> stuff, I think it would read better if the suspending
> statement were
> 
>    yield x
> 
> rather than
> 
>    suspend x

Fred Drake wrote:
>   I agree; this really improves readability.

Indeed, shortly after i wrote my generator examples, i wished i'd
written "generate x" rather than "suspend x".  "yield x" is good too.


-- ?!ng

Happiness comes more from loving than being loved; and often when our
affection seems wounded it is only our vanity bleeding. To love, and
to be hurt often, and to love again--this is the brave and happy life.
    -- J. E. Buchrose 




From tim.one at home.com  Wed Mar 21 08:15:23 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 02:15:23 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <20010320210442.A22819@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEBNJHAA.tim.one@home.com>

[Neil Schemenauer, among others confirming Linux behavior]
> There are probably lots of Linux testers around but here's what I
> get:
>
>     Python 2.1b2 (#2, Mar 20 2001, 23:52:29)
>     [GCC 2.95.3 20010219 (prerelease)] on linux2
>     Type "copyright", "credits" or "license" for more information.
>     >>> x = 0.0
>     >>> print "%.17g" % -x
>     -0
>     >>> print "%+.17g" % -x
>     -0
>
> libc is GNU 2.2.2  (if that matters).

Indeed, libc is probably the *only* thing that matters (Python defers to the
platform libc for float formatting).

> test_coercion works for me too.  Is test_coercion testing too much
> accidental implementation behavior?

I don't think so.  As a later message said, Jack *should* be getting a minus
0 if and only if he's running on an IEEE-754 box (extremely likely) and set
the rounding mode to minus-infinity (extremely unlikely).

But we don't yet know what the above prints on *his* box, so still don't know
whether that's relevant.

WRT display of signed zeroes (which may or may not have something to do with
Jack's problem), Python obviously varies across platforms.  But there is no
portable way in C89 to determine the sign of a zero, so we either live with
the cross-platform discrepancies, or force zeroes on output to always be
positive (in opposition to what C99 mandates).  (Note that I reject out of
hand that we #ifdef the snot out of the code to be able to detect the sign of
a 0 on various platforms -- Python doesn't conform to any other 754 rules,
and this one is minor.)

Ah, this is coming back to me now:  at Dragon this also popped up in our C++
code.  At least one flavor of Unix there also displayed -0 as if positive.  I
fiddled our output to suppress it, a la

def output(afloat):
    if not afloat:
        afloat *= afloat  # forces -0 and +0 to +0
    print afloat

(but in C++ <wink>).
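C89 offered no portable sign test for zero, but C99's copysign() (exposed
much later as math.copysign, Python 2.6) settles it.  A sketch of both the
detection and the scrub trick above; "scrub" is an invented name for this
example:

```python
import math

def scrub(x):
    """Force a negative zero to +0.0, like the fix described above."""
    if x == 0.0:
        x = x * x   # (-0.0) * (-0.0) == +0.0
    return x

# copysign reveals the sign bit even though -0.0 == 0.0 compares true:
assert math.copysign(1.0, -0.0) == -1.0
assert math.copysign(1.0, scrub(-0.0)) == 1.0
assert scrub(-5.0) == -5.0  # nonzero values pass through untouched
```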

would-rather-understand-jack's-true-problem-than-cover-up-a-
   symptom-ly y'rs  - tim




From fredrik at effbot.org  Wed Mar 21 08:26:26 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Wed, 21 Mar 2001 08:26:26 +0100
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
References: <web-1702694@digicool.com>
Message-ID: <012601c0b1d8$7dc3cc50$e46940d5@hagrid>

the real fred wrote:

> I agree; this really improves readability.  I'm sure someone
> knows of a precedent for the "suspend" keyword

Icon

(the suspend keyword "leaves the generating function
in suspension")

> but the only one I recall seeing before is "yield" (Sather).

I associate "yield" with non-preemptive threading (yield
to anyone else, not necessarily my caller).

Cheers /F




From tim.one at home.com  Wed Mar 21 08:25:42 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 02:25:42 -0500
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com>

I also like "yield", but when talking about Icon-style generators to people
who may not be familiar with them, I'll continue to use "suspend" (since
that's the word they'll see in the Icon docs, and they can get many more
examples from the latter than from me).




From tommy at ilm.com  Wed Mar 21 08:27:12 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Tue, 20 Mar 2001 23:27:12 -0800 (PST)
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>
	<LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <15032.22433.953503.130175@mace.lucasdigital.com>

I get the same ("0" then "+0") on my irix65 O2.  test_coercion succeeds
as well.


Tim Peters writes:
| Everyone!  Run this program under current CVS:
| 
| x = 0.0
| print "%.17g" % -x
| print "%+.17g" % -x
| 
| What do you get?  WinTel prints "0" for the first and "+0" for the second.
| 
| C89 doesn't define the results.
| 
| C99 requires "-0" for both (on boxes with signed floating zeroes, which is
| virtually all boxes today due to IEEE 754).
| 
| I don't want to argue the C rules, I just want to know whether this *does*
| vary across current platforms.
| 
| 
| _______________________________________________
| Python-Dev mailing list
| Python-Dev at python.org
| http://mail.python.org/mailman/listinfo/python-dev



From tommy at ilm.com  Wed Mar 21 08:37:00 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Tue, 20 Mar 2001 23:37:00 -0800 (PST)
Subject: [Python-Dev] tests failing on irix65
Message-ID: <15032.22504.605383.113425@mace.lucasdigital.com>

Hey Gang,

Given the latest state of the CVS tree I am getting the following
failures on my irix65 O2 (and have been for quite some time- I'm just
now getting around to reporting them):


------------%< snip %<----------------------%< snip %<------------

test_pty
The actual stdout doesn't match the expected stdout.
This much did match (between asterisk lines):
**********************************************************************
test_pty
**********************************************************************
Then ...
We expected (repr): 'I'
But instead we got: '\n'
test test_pty failed -- Writing: '\n', expected: 'I'


importing test_pty into an interactive interpreter gives this:

Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
Type "copyright", "credits" or "license" for more information.
>>> import test.test_pty
Calling master_open()
Got master_fd '4', slave_name '/dev/ttyq6'
Calling slave_open('/dev/ttyq6')
Got slave_fd '5'
Writing to slave_fd

I wish to buy a fish license.For my pet fish, Eric.
calling pty.fork()
Waiting for child (16654) to finish.
Child (16654) exited with status 1024.
>>> 

------------%< snip %<----------------------%< snip %<------------

test_symtable
test test_symtable crashed -- exceptions.TypeError: unsubscriptable object


running the code test_symtable code by hand in the interpreter gives
me:

Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
Type "copyright", "credits" or "license" for more information.
>>> import _symtable
>>> symbols = _symtable.symtable("def f(x): return x", "?", "exec")
>>> symbols
<symtable entry global(0), line 0>
>>> symbols[0]
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: unsubscriptable object


------------%< snip %<----------------------%< snip %<------------

test_zlib
make: *** [test] Segmentation fault (core dumped)


when I run python in a debugger and import test_zlib by hand I get
this:

Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
Type "copyright", "credits" or "license" for more information.
>>> import test.test_zlib
0xe5c1a120 0x43b6aa94
0xbd602f7 0xbd602f7
expecting Bad compression level
expecting Invalid initialization option
expecting Invalid initialization option
normal compression/decompression succeeded
compress/decompression obj succeeded
decompress with init options succeeded
decompressobj with init options succeeded

the failure is on line 86 of test_zlib.py (calling obj.flush()).
here are the relevant portions of the call stack (sorry they're
stripped):

t_delete(<stripped>) ["malloc.c":801]
realfree(<stripped>) ["malloc.c":531]
cleanfree(<stripped>) ["malloc.c":944]
_realloc(<stripped>) ["malloc.c":329]
_PyString_Resize(<stripped>) ["stringobject.c":2433]
PyZlib_flush(<stripped>) ["zlibmodule.c":595]
call_object(<stripped>) ["ceval.c":2706]
...
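For reference, the flush() call that crashes corresponds to the ordinary
compression-object pattern, which round-trips cleanly on a healthy build
(a sketch of the pattern, not the actual test_zlib code):

```python
import zlib

co = zlib.compressobj(6)                        # compression level 6
data = co.compress(b"spam" * 100) + co.flush()  # flush() is the crashing call
assert zlib.decompress(data) == b"spam" * 100   # round-trip succeeds
```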



From mal at lemburg.com  Wed Mar 21 11:02:54 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 21 Mar 2001 11:02:54 +0100
Subject: [Python-Dev] Function in os module for available disk space, why
References: <20010320213828.2D30F99C80@waltz.rahul.net>
Message-ID: <3AB87C4E.450723C2@lemburg.com>

Aahz Maruch wrote:
> 
> M.-A. Lemburg wrote:
> >
> > Wasn't shutil declared obsolete ?
> 
> <blink>  What?!

Guido once pronounced on this... mostly because of the comment
at the top regarding cross-platform compatibility:

"""Utility functions for copying files and directory trees.

XXX The functions here don't copy the resource fork or other metadata on Mac.

"""

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Wed Mar 21 11:41:38 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 21 Mar 2001 11:41:38 +0100
Subject: [Python-Dev] Re: What has become of PEP224 ?
References: <03b201c0affc$d9889440$0401000a@reston1.va.home.com> <15030.59057.866982.538935@anthem.wooz.org>
Message-ID: <3AB88562.F6FB0042@lemburg.com>

"Barry A. Warsaw" wrote:
> 
> >>>>> "GvR" == Guido van Rossum <guido at python.org> writes:
> 
>     GvR> So I see little chance for PEP 224.  Maybe I should just
>     GvR> pronounce on this, and declare the PEP rejected.
> 
> So, was that a BDFL pronouncement or not? :)

I guess so. 

I'll add Guido's comments (the ones he mailed me in
private) to the PEP and then forget about the idea of getting
doc-strings to play nice with attributes... :-(

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From mal at lemburg.com  Wed Mar 21 11:46:01 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Wed, 21 Mar 2001 11:46:01 +0100
Subject: [Python-Dev] RE: Unicode and the Windows file system.
References: <LCEPIIGDJPKCOIHOBJEPKEEODGAA.MarkH@ActiveState.com>
Message-ID: <3AB88669.3FDC1DE3@lemburg.com>

Mark Hammond wrote:
> 
> OK - it appears everyone agrees we should go the "Unicode API" route.  I
> actually thought my scheme did not preclude moving to this later.
> 
> This is a much bigger can of worms than I have bandwidth to take on at the
> moment.  As Martin mentions, what will os.listdir() return on Win9x vs
> Win2k?  What does passing a Unicode object to a non-Unicode Win32 platform
> mean? etc.  How do Win95/98/ME differ in their Unicode support?  Do the
> various service packs for each of these change the basic support?
> 
> So unfortunately this simply means the status quo remains until someone
> _does_ have the time and inclination.  That may well be me in the future,
> but is not now.  It also means that until then, Python programmers will
> struggle with this and determine that they can make it work simply by
> encoding the Unicode as an "mbcs" string.  Or worse, they will note that
> "latin1 seems to work" and use that even though it will work "less often"
> than mbcs.  I was simply hoping to automate that encoding using a scheme
> that works "most often".
> 
> The biggest drawback is that by doing nothing we are _encouraging_ the user
> to write broken code.  The way things stand at the moment, the users will
> _never_ pass Unicode objects to these APIs (as they dont work) and will
> therefore manually encode a string.  To my mind this is _worse_ than what my
> scheme proposes - at least my scheme allows Unicode objects to be passed to
> the Python functions - python may choose to change the way it handles these
> in the future.  But by forcing the user to encode a string we have lost
> _all_ meaningful information about the Unicode object and can only hope they
> got the encoding right.
> 
> If anyone else decides to take this on, please let me know.  However, I fear
> that in a couple of years we may still be waiting and in the meantime people
> will be coding hacks that will _not_ work in the new scheme.

Ehm, AFAIR, the Windows CRT APIs can take MBCS character input,
so why don't we go that route first and then later switch over
to full Unicode support ?

After all, I added the "es#" parser markers because you bugged me about
wanting to use them for Windows in the MBCS context -- you even
wrote up the MBCS codec... all this code has to be good for 
something ;-)

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From thomas at xs4all.net  Wed Mar 21 12:08:34 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 21 Mar 2001 12:08:34 +0100
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>; from tim.one@home.com on Tue, Mar 20, 2001 at 10:33:15PM -0500
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl> <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <20010321120833.Q29286@xs4all.nl>

On Tue, Mar 20, 2001 at 10:33:15PM -0500, Tim Peters wrote:
> Everyone!  Run this program under current CVS:

> x = 0.0
> print "%.17g" % -x
> print "%+.17g" % -x

> What do you get?  WinTel prints "0" for the first and "+0" for the second.

On BSDI (both 4.0 (gcc 2.7.2.1) and 4.1 (egcs 1.1.2 (2.91.66)) as well as
FreeBSD 4.2 (gcc 2.95.2):

>>> x = 0.0
>>> print "%.17g" % -x
0
>>> print "%+.17g" % -x
+0

Note that neither uses GNU libc even though they use gcc.
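
For what it's worth, the sign bit of a zero can be checked independently of
the C library's "%g" formatting (a sketch in modern Python; `math.copysign`
did not exist in 2.1):

```python
import math

x = 0.0
neg = -x                                # IEEE 754 negative zero
assert neg == 0.0                       # compares equal to +0.0 ...
assert math.copysign(1.0, neg) == -1.0  # ... but the sign bit is set
# how this prints depended on the platform's C library in the 2.1 era:
print("%+.17g" % neg)
```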

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jack at oratrix.nl  Wed Mar 21 12:31:07 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 21 Mar 2001 12:31:07 +0100
Subject: [Python-Dev] Unicode and the Windows file system. 
In-Reply-To: Message by "Mark Hammond" <MarkH@ActiveState.com> ,
	     Mon, 19 Mar 2001 20:40:24 +1100 , <LCEPIIGDJPKCOIHOBJEPKEDFDGAA.MarkH@ActiveState.com> 
Message-ID: <20010321113107.A325B36B2C1@snelboot.oratrix.nl>

> The way I see it, to fix this we have 2 basic choices when a Unicode object
> is passed as a filename:
> * we call the Unicode versions of the CRTL.
> * we auto-encode using the "mbcs" encoding, and still call the non-Unicode
> versions of the CRTL.
> 
> The first option has a problem in that determining what Unicode support
> Windows 95/98 have may be more trouble than it is worth.  Sticking to purely
> ascii versions of the functions means that the worst thing that can happen
> is we get a regular file-system error if an mbcs encoded string is passed on
> a non-Unicode platform.
> 
> Does anyone have any objections to this scheme or see any drawbacks in it?
> If not, I'll knock up a patch...

The Mac has a very similar problem here: unless you go to the unicode APIs 
(which is pretty much impossible for stdio calls and such at the moment) you 
have to use the "current" 8-bit encoding for filenames.

Could you put your patch in such a shape that it could easily be adapted for 
other platforms? Something like PyOS_8BitFilenameFromUnicodeObject(PyObject *, 
char *, int) or so?
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From tismer at tismer.com  Wed Mar 21 13:52:05 2001
From: tismer at tismer.com (Christian Tismer)
Date: Wed, 21 Mar 2001 13:52:05 +0100
Subject: FW: FW: [Python-Dev] Simple generator implementation
References: <200103210423.VAA20300@localhost.localdomain>
Message-ID: <3AB8A3F5.D79F7AD8@tismer.com>


Uche Ogbuji wrote:
> 
> > [Uche Ogbuji]
> > > Quite interesting.  I brought up this *exact* point at the
> > > Stackless BOF at IPC9.  I mentioned that the immediate reason
> > > I was interested in Stackless was to supercharge the efficiency
> > > of 4XSLT.  I think that a stackless 4XSLT could pretty much
> > > annihilate the other processors in the field for performance.
> >
> > Hmm.  I'm interested in clarifying the cost/performance boundaries of the
> > various approaches.  I don't understand XSLT (I don't even know what it is).
> > Do you grok the difference between full-blown Stackless and Icon-style
> > generators?
> 
> To a decent extent, based on reading your posts carefully.
> 
> > The correspondent I quoted believed the latter were on-target
> > for XSLT work, and given the way Python works today generators are easier to
> > implement than full-blown Stackless.  But while I can speak with some
> > confidence about the latter, I don't know whether they're sufficient for what
> > you have in mind.
> 
> Based on a discussion with Christian at IPC9, they are.  I should have been
> more clear about that.  My main need is to be able to change a bit of context
> and invoke a different execution path, without going through the full overhead
> of a function call.  XSLT, if written "naturally", tends to involve huge
> numbers of such tweak-context-and-branch operations.
> 
> > If this is some flavor of one-at-time tree-traversal algorithm, generators
> > should suffice.
> >
> > class TreeNode:
> >     # with self.value
> >     #      self.children, a list of TreeNode objects
> >     ...
> >     def generate_kids(self):  # pre-order traversal
> >         suspend self.value
> >         for kid in self.children:
> >             for itskids in kid.generate_kids():
> >                 suspend itskids
> >
> > for k in someTreeNodeObject.generate_kids():
> >     print k
> >
> > So the control-flow is thoroughly natural, but you can only suspend to your
> > immediate invoker (in recursive traversals, this "walks up the chain" of
> > generators for each result).  With explicitly resumable generator objects,
> > multiple trees (or even general graphs -- doesn't much matter) can be
> > traversed in lockstep (or any other interleaving that's desired).
> >
> > Now decide <wink>.
> 
> Suspending only to the invoker should do the trick because it is typically a
> single XSLT instruction that governs multiple tree-operations with varied
> context.
> 
> At IPC9, Guido put up a poll of likely use of stackless features, and it was a
> pretty clear arithmetic progression from those who wanted to use microthreads,
> to those who wanted co-routines, to those who wanted just generators.  The
> generator folks were probably 2/3 of the assembly.  Looks as if many have
> decided, and they seem to agree with you.
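
The `suspend` pseudocode quoted above is roughly the generator syntax Python
later adopted, with `yield` in place of `suspend`.  A runnable sketch in
modern Python (class and tree shape are illustrative):

```python
class TreeNode:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

    def generate_kids(self):          # pre-order traversal
        yield self.value              # "suspend" became "yield"
        for kid in self.children:
            for itskids in kid.generate_kids():
                yield itskids

tree = TreeNode(1, [TreeNode(2), TreeNode(3, [TreeNode(4)])])
print(list(tree.generate_kids()))     # pre-order: parent before children
```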

Here the exact facts of the poll:

     microthreads: 26
     co-routines:  35
     generators:   44

I think this reads a little different.

ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From jack at oratrix.nl  Wed Mar 21 13:57:53 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 21 Mar 2001 13:57:53 +0100
Subject: [Python-Dev] test_coercion failing 
In-Reply-To: Message by "Tim Peters" <tim.one@home.com> ,
	     Tue, 20 Mar 2001 22:33:15 -0500 , <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com> 
Message-ID: <20010321125753.9D98B36B2C1@snelboot.oratrix.nl>

> Everyone!  Run this program under current CVS:
> 
> x = 0.0
> print "%.17g" % -x
> print "%+.17g" % -x
> 
> What do you get?  WinTel prints "0" for the first and "+0" for the second.

Macintosh: -0 for both.
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From thomas at xs4all.net  Wed Mar 21 14:07:04 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Wed, 21 Mar 2001 14:07:04 +0100
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.22504.605383.113425@mace.lucasdigital.com>; from tommy@ilm.com on Tue, Mar 20, 2001 at 11:37:00PM -0800
References: <15032.22504.605383.113425@mace.lucasdigital.com>
Message-ID: <20010321140704.R29286@xs4all.nl>

On Tue, Mar 20, 2001 at 11:37:00PM -0800, Flying Cougar Burnette wrote:

> ------------%< snip %<----------------------%< snip %<------------

> test_pty
> The actual stdout doesn't match the expected stdout.
> This much did match (between asterisk lines):
> **********************************************************************
> test_pty
> **********************************************************************
> Then ...
> We expected (repr): 'I'
> But instead we got: '\n'
> test test_pty failed -- Writing: '\n', expected: 'I'
> 
> 
> importing test_pty into an interactive interpreter gives this:
> 
> Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
> Type "copyright", "credits" or "license" for more information.
> >>> import test.test_pty
> Calling master_open()
> Got master_fd '4', slave_name '/dev/ttyq6'
> Calling slave_open('/dev/ttyq6')
> Got slave_fd '5'
> Writing to slave_fd
> 
> I wish to buy a fish license.For my pet fish, Eric.
> calling pty.fork()
> Waiting for child (16654) to finish.
> Child (16654) exited with status 1024.
> >>> 

Hmm. This is probably my test that is a bit gaga. It tries to test the pty
module, but since I can't find any guarantees on how pty's should work, it
probably relies on platform-specific accidents. It does the following:

---
TEST_STRING_1 = "I wish to buy a fish license."
TEST_STRING_2 = "For my pet fish, Eric."

[..]

debug("Writing to slave_fd")
os.write(slave_fd, TEST_STRING_1) # should check return value
print os.read(master_fd, 1024)

os.write(slave_fd, TEST_STRING_2[:5])
os.write(slave_fd, TEST_STRING_2[5:])
print os.read(master_fd, 1024)
---

Apparently, irix buffers the first write somewhere. Can you test if the
following works better:

---
TEST_STRING_1 = "I wish to buy a fish license.\n"
TEST_STRING_2 = "For my pet fish, Eric.\n"

[..]

debug("Writing to slave_fd")
os.write(slave_fd, TEST_STRING_1) # should check return value
sys.stdout.write(os.read(master_fd, 1024))

os.write(slave_fd, TEST_STRING_2[:5])
os.write(slave_fd, TEST_STRING_2[5:])
sys.stdout.write(os.read(master_fd, 1024))
---

(There should be no need to regenerate the output file, but if it still
fails on the same spot, try running it in verbose and see if you still have
the blank line after 'writing to slave_fd'.)
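
The master/slave round trip being tested reduces to a few lines (a sketch
using the same calls as the test; note the terminal driver may translate
"\n" to "\r\n" on the way through, which is the platform-specific part):

```python
import os
import pty

master_fd, slave_fd = pty.openpty()
os.write(slave_fd, b"I wish to buy a fish license.\n")
data = os.read(master_fd, 1024)   # comes back through the terminal driver
print(data)
```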

Note that the pty module is working fine, it's just the test that is screwed
up. Out of curiosity, is the test_openpty test working, or is it skipped ?

I see I also need to fix some other stuff in there, but I'll wait with that
until I hear that this works better :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From jack at oratrix.nl  Wed Mar 21 14:30:32 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 21 Mar 2001 14:30:32 +0100
Subject: [Python-Dev] test_coercion failing 
In-Reply-To: Message by Guido van Rossum <guido@digicool.com> ,
	     Tue, 20 Mar 2001 16:16:32 -0500 , <200103202116.QAA01770@cj20424-a.reston1.va.home.com> 
Message-ID: <20010321133032.9906836B2C1@snelboot.oratrix.nl>

It turns out that even simple things like 0j/2 return -0.0.

The culprit appears to be the statement
    r.imag = (a.imag - a.real*ratio) / denom;
in c_quot(), line 108.

The inner part is translated into a PPC multiply-subtract instruction
	fnmsub   fp0, fp1, fp31, fp0
Or, in other words, this computes "0.0 - (2.0 * 0.0)".  The result of this is 
apparently -0.0.  This sounds reasonable to me - or is it against IEEE 754 
rules (or C99 rules)?

If this is all according to 754 rules the one puzzle remaining is why other 
754 platforms don't see the same thing. Could it be that the combined 
multiply-subtract skips a rounding step that separate multiply and subtract 
instructions would take? My floating point knowledge is pretty basic, so 
please enlighten me....
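
One way the sign can flip: if the compiler rewrites "b - a*c" as the negated
multiply-subtract "-((a*c) - b)", the final negation turns +0.0 into -0.0.
A sketch of that rewrite in plain Python arithmetic (whether this is exactly
what fnmsub does here is the open question above):

```python
import math

a_imag, a_real, ratio = 0.0, 0.0, 0.0

direct = a_imag - a_real * ratio          # 0.0 - 0.0 rounds to +0.0
rewritten = -(a_real * ratio - a_imag)    # the outer negation yields -0.0

assert math.copysign(1.0, direct) == 1.0
assert math.copysign(1.0, rewritten) == -1.0
```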
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From guido at digicool.com  Wed Mar 21 15:36:49 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 21 Mar 2001 09:36:49 -0500
Subject: [Python-Dev] Editor sought for Quick Python Book 2nd ed.
Message-ID: <200103211436.JAA04108@cj20424-a.reston1.va.home.com>

The publisher of the Quick Python Book has approached me looking for
an editor for the second edition.  Anybody interested?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From uche.ogbuji at fourthought.com  Wed Mar 21 15:42:04 2001
From: uche.ogbuji at fourthought.com (Uche Ogbuji)
Date: Wed, 21 Mar 2001 07:42:04 -0700
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: Message from Christian Tismer <tismer@tismer.com> 
   of "Wed, 21 Mar 2001 13:52:05 +0100." <3AB8A3F5.D79F7AD8@tismer.com> 
Message-ID: <200103211442.HAA21574@localhost.localdomain>

> > At IPC9, Guido put up a poll of likely use of stackless features, and it was a
> > pretty clear arithmetic progression from those who wanted to use microthreads,
> > to those who wanted co-routines, to those who wanted just generators.  The
> > generator folks were probably 2/3 of the assembly.  Looks as if many have
> > decided, and they seem to agree with you.
> 
> Here the exact facts of the poll:
> 
>      microthreads: 26
>      co-routines:  35
>      generators:   44
> 
> I think this reads a little different.

Either you're misreading me or I'm misreading you, because your facts seem to 
*exactly* corroborate what I said.  26 -> 35 -> 44 is pretty much an 
arithmetic progression, and it's exactly in the direction I mentioned 
(microthreads -> co-routines -> generators), so what difference do you see?

Of course my 2/3 number is a guess.  60 - 70 total people in the room strikes 
my memory rightly.  Anyone else?


-- 
Uche Ogbuji                               Principal Consultant
uche.ogbuji at fourthought.com               +1 303 583 9900 x 101
Fourthought, Inc.                         http://Fourthought.com 
4735 East Walnut St, Ste. C, Boulder, CO 80301-2537, USA
Software-engineering, knowledge-management, XML, CORBA, Linux, Python





From skip at pobox.com  Wed Mar 21 15:46:51 2001
From: skip at pobox.com (Skip Montanaro)
Date: Wed, 21 Mar 2001 08:46:51 -0600 (CST)
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
References: <20010320204742.BC08AEA11D@oratrix.oratrix.nl>
	<LNBBLJKPBEHFEDALKOLCMEBBJHAA.tim.one@home.com>
Message-ID: <15032.48859.744374.786895@beluga.mojam.com>

    Tim> Everyone!  Run this program under current CVS:
    Tim> x = 0.0
    Tim> print "%.17g" % -x
    Tim> print "%+.17g" % -x

    Tim> What do you get?

% ./python
Python 2.1b2 (#2, Mar 21 2001, 08:43:16) 
[GCC 2.95.3 19991030 (prerelease)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
-0
>>> print "%+.17g" % -x
-0

% ldd ./python
        libpthread.so.0 => /lib/libpthread.so.0 (0x4001a000)
        libdl.so.2 => /lib/libdl.so.2 (0x4002d000)
        libutil.so.1 => /lib/libutil.so.1 (0x40031000)
        libm.so.6 => /lib/libm.so.6 (0x40034000)
        libc.so.6 => /lib/libc.so.6 (0x40052000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)

libc appears to actually be GNU libc 2.1.3.



From tismer at tismer.com  Wed Mar 21 15:52:14 2001
From: tismer at tismer.com (Christian Tismer)
Date: Wed, 21 Mar 2001 15:52:14 +0100
Subject: FW: FW: [Python-Dev] Simple generator implementation
References: <200103211442.HAA21574@localhost.localdomain>
Message-ID: <3AB8C01E.867B9C5C@tismer.com>


Uche Ogbuji wrote:
> 
> > > At IPC9, Guido put up a poll of likely use of stackless features, and it was a
> > > pretty clear arithmetic progression from those who wanted to use microthreads,
> > > to those who wanted co-routines, to those who wanted just generators.  The
> > > generator folks were probably 2/3 of the assembly.  Looks as if many have
> > > decided, and they seem to agree with you.
> >
> > Here the exact facts of the poll:
> >
> >      microthreads: 26
> >      co-routines:  35
> >      generators:   44
> >
> > I think this reads a little different.
> 
> Either you're misreading me or I'm misreading you, because your facts seem to
> *exactly* corroborate what I said.  26 -> 35 -> 44 is pretty much an
> arithmetic progression, and it's exactly in the direction I mentioned
> (microthreads -> co-routines -> generators), so what difference do you see?
> 
> Of course my 2/3 number is a guess.  60 - 70 total people in the room strikes
> my memory rightly.  Anyone else?

You are right, I was misunderstanding you. I thought 2/3 of
all votes were in favor of generators, while my picture
is "most want generators, but the others are of comparable
interest".

sorry - ciao - chris

-- 
Christian Tismer             :^)   <mailto:tismer at tismer.com>
Mission Impossible 5oftware  :     Have a break! Take a ride on Python's
Kaunstr. 26                  :    *Starship* http://starship.python.net/
14163 Berlin                 :     PGP key -> http://wwwkeys.pgp.net/
PGP Fingerprint       E182 71C7 1A9D 66E9 9D15  D3CC D4D7 93E2 1FAE F6DF
     where do you want to jump today?   http://www.stackless.com/



From mwh21 at cam.ac.uk  Wed Mar 21 16:39:40 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 15:39:40 +0000
Subject: [Python-Dev] Nested scopes core dump
In-Reply-To: "Tim Peters"'s message of "Tue, 20 Mar 2001 11:01:21 -0500"
References: <LNBBLJKPBEHFEDALKOLCEEONJGAA.tim.one@home.com>
Message-ID: <m3vgp3f7wj.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one at home.com> writes:

> [Michael Hudson]
> >>> Maybe you could do the check for resize *after* the call to
> >>> insertdict?  I think that would work, but I wouldn't like to go
> >>> messing with such a performance critical bit of code without some
> >>> careful thinking.
> 
> [Guido]
> >> No, that could still decide to resize, couldn't it?
> 
> [Michael]
> > Yes, but not when you're inserting on a key that is already in the
> > dictionary - because the resize would have happened when the key was
> > inserted into the dictionary, and thus the problem we're seeing here
> > wouldn't happen.
> 
> Careful:  this comment is only half the truth:
> 
> 	/* if fill >= 2/3 size, double in size */

Yes, that could be clearer.  I was confused by the distinction between
ma_used and ma_fill for a bit.

> The dictresize following is also how dicts *shrink*.  That is, build
> up a dict, delete a whole bunch of keys, and nothing at all happens
> to the size until you call setitem again (actually, I think you need
> to call it more than once -- the behavior is tricky).

Well, as I read it, if you delete a bunch of keys and then insert the
same keys again (as in pybench's SimpleDictManipulation), no resize
will happen because ma_fill will be unaffected.  A resize will only
happen if you fill up enough slots to get the 

    mp->ma_fill*3 >= mp->ma_size*2

to trigger.
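
A toy model of that bookkeeping (not CPython's actual code; `fill` counts
live plus dummy slots while `used` would count only live keys, and the
class, names, and 2/3 check here are purely illustrative):

```python
class ToyDict:
    """Toy model of the 2.1-era dict counters (illustrative only)."""

    def __init__(self, size=8):
        self.size = size
        self.live = set()      # keys currently present (ma_used)
        self.dummies = set()   # deleted slots awaiting reuse
        self.resizes = 0

    def fill(self):            # ma_fill: live + dummy slots
        return len(self.live) + len(self.dummies)

    def setitem(self, key):
        self.dummies.discard(key)    # reusing a dummy leaves fill unchanged
        self.live.add(key)
        if self.fill() * 3 >= self.size * 2:   # the 2/3 trigger
            self.size *= 2
            self.dummies.clear()     # a resize throws the dummies away
            self.resizes += 1

    def delitem(self, key):
        self.live.discard(key)
        self.dummies.add(key)        # slot becomes a dummy: fill unchanged

d = ToyDict()
for k in range(4):
    d.setitem(k)          # fill grows to 4, below the 2/3 trigger
for k in range(4):
    d.delitem(k)          # fill stays 4: deletes never lower it
for k in range(4):
    d.setitem(k)          # dummies reused, fill still 4: no resize
```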

> In any case, that a key is already in the dict does not guarantee
> that a dict won't resize (via shrinking) when doing a setitem.

Yes.  But I still think that the patch I posted here yesterday (the one
that checks for resize after the call to insertdict in PyDict_SetItem)
will suffice; even if you've deleted a bunch of keys,
ma_fill will be unaffected by the deletes, so the size check before the
insertdict won't be triggered (because it wasn't triggered by the one after
the call to insertdict in the last call to setitem), and neither will the
size check after the call to insertdict (because
you're inserting on a key already in the dictionary, so ma_fill
will be unchanged).  But this is mighty fragile; something more
explicit is almost certainly a good idea.

So someone should either

> bite the bullet and add a new PyDict_AdjustSize function, just
> duplicating the resize logic.  

or just put a check in PyDict_Next, or outlaw this practice and fix
the places that do it.  And then document the conclusion.  And do it
before 2.1b2 on Friday.  I'll submit a patch, unless you're very
quick.

> Delicate, though.

Uhh, I'd say so.

Cheers,
M.

-- 
 Very clever implementation techniques are required to implement this
 insanity correctly and usefully, not to mention that code written
 with this feature used and abused east and west is exceptionally
 exciting to debug.       -- Erik Naggum on Algol-style "call-by-name"




From jeremy at alum.mit.edu  Wed Mar 21 16:51:28 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 10:51:28 -0500 (EST)
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com>
References: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz>
	<LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com>
Message-ID: <15032.52736.537333.260718@w221.z064000254.bwi-md.dsl.cnc.net>

On the subject of keyword preferences, I like yield best because I
first saw iterators (Icon's generators) in CLU and CLU uses yield.

Jeremy



From jeremy at alum.mit.edu  Wed Mar 21 16:56:35 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 10:56:35 -0500 (EST)
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.22504.605383.113425@mace.lucasdigital.com>
References: <15032.22504.605383.113425@mace.lucasdigital.com>
Message-ID: <15032.53043.180771.612275@w221.z064000254.bwi-md.dsl.cnc.net>

The test_symtable crash is a shallow one.  There's a dependency
between a .h file and the extension module that isn't captured in the
setup.py.  I think you can delete _symtablemodule.o and rebuild -- or
do a make clean.  It should work then.

Jeremy



From tommy at ilm.com  Wed Mar 21 18:02:48 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Wed, 21 Mar 2001 09:02:48 -0800 (PST)
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.53043.180771.612275@w221.z064000254.bwi-md.dsl.cnc.net>
References: <15032.22504.605383.113425@mace.lucasdigital.com>
	<15032.53043.180771.612275@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <15032.57011.412823.462175@mace.lucasdigital.com>

That did it.  thanks!

Jeremy Hylton writes:
| The test_symtable crash is a shallow one.  There's a dependency
| between a .h file and the extension module that isn't captured in the
| setup.py.  I think you can delete _symtablemodule.o and rebuild -- or
| do a make clean.  It should work then.
| 
| Jeremy



From tommy at ilm.com  Wed Mar 21 18:08:49 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Wed, 21 Mar 2001 09:08:49 -0800 (PST)
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <20010321140704.R29286@xs4all.nl>
References: <15032.22504.605383.113425@mace.lucasdigital.com>
	<20010321140704.R29286@xs4all.nl>
Message-ID: <15032.57243.391141.409534@mace.lucasdigital.com>

Hey Thomas,

with these changes to test_pty.py I now get:

test_pty
The actual stdout doesn't match the expected stdout.
This much did match (between asterisk lines):
**********************************************************************
test_pty
**********************************************************************
Then ...
We expected (repr): 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'
But instead we got: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n'
test test_pty failed -- Writing: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n', expected: 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'

but when I import test.test_pty that blank line is gone.  Sounds like
the test verification just needs to be a bit more flexible, maybe?

test_openpty passes without a problem, BTW.



Thomas Wouters writes:
| On Tue, Mar 20, 2001 at 11:37:00PM -0800, Flying Cougar Burnette wrote:
| 
| > ------------%< snip %<----------------------%< snip %<------------
| 
| > test_pty
| > The actual stdout doesn't match the expected stdout.
| > This much did match (between asterisk lines):
| > **********************************************************************
| > test_pty
| > **********************************************************************
| > Then ...
| > We expected (repr): 'I'
| > But instead we got: '\n'
| > test test_pty failed -- Writing: '\n', expected: 'I'
| > 
| > 
| > importing test_pty into an interactive interpreter gives this:
| > 
| > Python 2.1b2 (#27, Mar 20 2001, 23:21:17) [C] on irix6
| > Type "copyright", "credits" or "license" for more information.
| > >>> import test.test_pty
| > Calling master_open()
| > Got master_fd '4', slave_name '/dev/ttyq6'
| > Calling slave_open('/dev/ttyq6')
| > Got slave_fd '5'
| > Writing to slave_fd
| > 
| > I wish to buy a fish license.For my pet fish, Eric.
| > calling pty.fork()
| > Waiting for child (16654) to finish.
| > Child (16654) exited with status 1024.
| > >>> 
| 
| Hmm. This is probably my test that is a bit gaga. It tries to test the pty
| module, but since I can't find any guarantees on how pty's should work, it
| probably relies on platform-specific accidents. It does the following:
| 
| ---
| TEST_STRING_1 = "I wish to buy a fish license."
| TEST_STRING_2 = "For my pet fish, Eric."
| 
| [..]
| 
| debug("Writing to slave_fd")
| os.write(slave_fd, TEST_STRING_1) # should check return value
| print os.read(master_fd, 1024)
| 
| os.write(slave_fd, TEST_STRING_2[:5])
| os.write(slave_fd, TEST_STRING_2[5:])
| print os.read(master_fd, 1024)
| ---
| 
| Apparently, irix buffers the first write somewhere. Can you test if the
| following works better:
| 
| ---
| TEST_STRING_1 = "I wish to buy a fish license.\n"
| TEST_STRING_2 = "For my pet fish, Eric.\n"
| 
| [..]
| 
| debug("Writing to slave_fd")
| os.write(slave_fd, TEST_STRING_1) # should check return value
| sys.stdout.write(os.read(master_fd, 1024))
| 
| os.write(slave_fd, TEST_STRING_2[:5])
| os.write(slave_fd, TEST_STRING_2[5:])
| sys.stdout.write(os.read(master_fd, 1024))
| ---
| 
| (There should be no need to regenerate the output file, but if it still
| fails on the same spot, try running it in verbose and see if you still have
| the blank line after 'writing to slave_fd'.)
| 
| Note that the pty module is working fine, it's just the test that is screwed
| up. Out of curiosity, is the test_openpty test working, or is it skipped ?
| 
| I see I also need to fix some other stuff in there, but I'll wait with that
| until I hear that this works better :)
| 
| -- 
| Thomas Wouters <thomas at xs4all.net>
| 
| Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From barry at digicool.com  Wed Mar 21 18:40:21 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Wed, 21 Mar 2001 12:40:21 -0500
Subject: [Python-Dev] PEP 1, PEP Purpose and Guidelines
Message-ID: <15032.59269.4520.961715@anthem.wooz.org>

With everyone feeling so PEPpy lately (yay!) I thought it was time to
do an updating pass through PEP 1.  Attached below is the latest copy,
also available (as soon as uploading is complete) via

    http://python.sourceforge.net/peps/pep-0001.html

Note the addition of the Replaces: and Replaced-By: headers for
formalizing the PEP replacement policy (thanks to Andrew Kuchling for
the idea and patch).

Enjoy,
-Barry

-------------------- snip snip --------------------
PEP: 1
Title: PEP Purpose and Guidelines
Version: $Revision: 1.16 $
Author: barry at digicool.com (Barry A. Warsaw),
    jeremy at digicool.com (Jeremy Hylton)
Status: Draft
Type: Informational
Created: 13-Jun-2000
Post-History: 21-Mar-2001


What is a PEP?

    PEP stands for Python Enhancement Proposal.  A PEP is a design
    document providing information to the Python community, or
    describing a new feature for Python.  The PEP should provide a
    concise technical specification of the feature and a rationale for
    the feature.

    We intend PEPs to be the primary mechanisms for proposing new
    features, for collecting community input on an issue, and for
    documenting the design decisions that have gone into Python.  The
    PEP author is responsible for building consensus within the
    community and documenting dissenting opinions.

    Because the PEPs are maintained as plain text files under CVS
    control, their revision history is the historical record of the
    feature proposal[1].
    

Kinds of PEPs

    There are two kinds of PEPs.  A standards track PEP describes a
    new feature or implementation for Python.  An informational PEP
    describes a Python design issue, or provides general guidelines or
    information to the Python community, but does not propose a new
    feature.


PEP Work Flow

    The PEP editor, Barry Warsaw <barry at digicool.com>, assigns a number
    to each PEP and changes its status.

    The PEP process begins with a new idea for Python.  Each PEP must
    have a champion -- someone who writes the PEP using the style and
    format described below, shepherds the discussions in the
    appropriate forums, and attempts to build community consensus
    around the idea.  The PEP champion (a.k.a. Author) should first
    attempt to ascertain whether the idea is PEP-able.  Small
    enhancements or patches often don't need a PEP and can be injected
    into the Python development work flow with a patch submission to
    the SourceForge patch manager[2] or feature request tracker[3].

    The PEP champion then emails the PEP editor with a proposed title
    and a rough, but fleshed out, draft of the PEP.  This draft must
    be written in PEP style as described below.

    If the PEP editor approves, he will assign the PEP a number, label
    it as standards track or informational, give it status 'draft',
    and create and check-in the initial draft of the PEP.  The PEP
    editor will not unreasonably deny a PEP.  Reasons for denying PEP
    status include duplication of effort, being technically unsound,
    or not in keeping with the Python philosophy.  The BDFL
    (Benevolent Dictator for Life, Guido van Rossum
    <guido at python.org>) can be consulted during the approval phase,
    and is the final arbitrator of the draft's PEP-ability.

    The author of the PEP is then responsible for posting the PEP to
    the community forums, and marshaling community support for it.  As
    updates are necessary, the PEP author can check in new versions if
    they have CVS commit permissions, or can email new PEP versions to
    the PEP editor for committing.

    Standards track PEPs consist of two parts, a design document and
    a reference implementation.  The PEP should be reviewed and
    accepted before a reference implementation is begun, unless a
    reference implementation will aid people in studying the PEP.
    Standards track PEPs must include an implementation - in the form
    of code, a patch, or a URL to same - before they can be considered
    Final.

    PEP authors are responsible for collecting community feedback on a
    PEP before submitting it for review.  A PEP that has not been
    discussed on python-list at python.org and/or python-dev at python.org
    will not be accepted.  However, wherever possible, long open-ended
    discussions on public mailing lists should be avoided.  A better
    strategy is to encourage public feedback directly to the PEP
    author, who collects and integrates the comments back into the
    PEP.

    Once the authors have completed a PEP, they must inform the PEP
    editor that it is ready for review.  PEPs are reviewed by the BDFL
    and his chosen consultants, who may accept or reject a PEP or send
    it back to the author(s) for revision.

    Once a PEP has been accepted, the reference implementation must be
    completed.  When the reference implementation is complete and
    accepted by the BDFL, the status will be changed to `Final.'

    A PEP can also be assigned status `Deferred.'  The PEP author or
    editor can assign the PEP this status when no progress is being
    made on the PEP.  Once a PEP is deferred, the PEP editor can
    re-assign it to draft status.

    A PEP can also be `Rejected'.  Perhaps after all is said and done
    it was not a good idea.  It is still important to have a record of
    this fact.

    PEPs can also be replaced by a different PEP, rendering the
    original obsolete.  This is intended for Informational PEPs, where
    version 2 of an API can replace version 1.

    PEP work flow is as follows:

        Draft -> Accepted -> Final -> Replaced
          ^
          +----> Rejected
          v
        Deferred

    Some informational PEPs may also have a status of `Active' if they
    are never meant to be completed.  E.g. PEP 1.
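
    The work flow above amounts to a small state machine.  A
    hypothetical sketch of it in Python (the transition table and
    function name are illustrative only, not part of any actual PEP
    tooling):

```python
# Hypothetical transition table for the PEP work flow diagrammed above.
TRANSITIONS = {
    "Draft": {"Accepted", "Rejected", "Deferred"},
    "Deferred": {"Draft"},
    "Accepted": {"Final"},
    "Final": {"Replaced"},
}

def can_move(old_status, new_status):
    """Return True if the work flow permits old_status -> new_status."""
    return new_status in TRANSITIONS.get(old_status, set())

print(can_move("Draft", "Accepted"))   # True
print(can_move("Deferred", "Draft"))   # True: deferred PEPs can be revived
print(can_move("Final", "Draft"))      # False
```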


What belongs in a successful PEP?

    Each PEP should have the following parts:

    1. Preamble -- RFC822 style headers containing meta-data about the
       PEP, including the PEP number, a short descriptive title, the
       names and contact info for each author, etc.

    2. Abstract -- a short (~200 word) description of the technical
       issue being addressed.

    3. Copyright/public domain -- Each PEP must either be explicitly
       labelled as placed in the public domain or licensed under the
       Open Publication License[4].

    4. Specification -- The technical specification should describe
       the syntax and semantics of any new language feature.  The
       specification should be detailed enough to allow competing,
       interoperable implementations for any of the current Python
       platforms (CPython, JPython, Python .NET).

    5. Rationale -- The rationale fleshes out the specification by
       describing what motivated the design and why particular design
       decisions were made.  It should describe alternate designs that
       were considered and related work, e.g. how the feature is
       supported in other languages.

       The rationale should provide evidence of consensus within the
       community and discuss important objections or concerns raised
       during discussion.

    6. Reference Implementation -- The reference implementation must
       be completed before any PEP is given status 'Final,' but it
       need not be completed before the PEP is accepted.  It is better
       to finish the specification and rationale first and reach
       consensus on it before writing code.

       The final implementation must include test code and
       documentation appropriate for either the Python language
       reference or the standard library reference.


PEP Style

    PEPs are written in plain ASCII text, and should adhere to a
    rigid style.  There is a Python script that parses this style and
    converts the plain text PEP to HTML for viewing on the web[5].

    Each PEP must begin with an RFC822 style header preamble.  The
    headers must appear in the following order.  Headers marked with
    `*' are optional and are described below.  All other headers are
    required.

        PEP: <pep number>
        Title: <pep title>
        Version: <cvs version string>
        Author: <list of authors' email and real name>
      * Discussions-To: <email address>
        Status: <Draft | Active | Accepted | Deferred | Final | Replaced>
        Type: <Informational | Standards Track>
        Created: <date created on, in dd-mmm-yyyy format>
      * Python-Version: <version number>
        Post-History: <dates of postings to python-list and python-dev>
      * Replaces: <pep number>
      * Replaced-By: <pep number>

    Standards track PEPs must have a Python-Version: header which
    indicates the version of Python that the feature will be released
    with.  Informational PEPs do not need a Python-Version: header.

    While a PEP is in private discussions (usually during the initial
    Draft phase), a Discussions-To: header will indicate the mailing
    list or URL where the PEP is being discussed.  No Discussions-To:
    header is necessary if the PEP is being discussed privately with
    the author, or on the python-list or python-dev email mailing
    lists.

    PEPs may also have a Replaced-By: header indicating that a PEP has
    been rendered obsolete by a later document; the value is the
    number of the PEP that replaces the current document.  The newer
    PEP must have a Replaces: header containing the number of the PEP
    that it rendered obsolete.
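
    Because the preamble is RFC822-style, it can be read with a stock
    mail-header parser.  A minimal sketch using the standard-library
    email module (the sample PEP text below is invented for
    illustration):

```python
import email

# Invented sample preamble, for illustration only.
sample = """\
PEP: 9999
Title: An Example PEP
Author: A. N. Author <author@example.com>
Status: Draft
Type: Informational
Created: 01-Jan-2001

Abstract text would follow the blank line.
"""

headers = email.message_from_string(sample)
print(headers["Status"])          # Draft
print(headers["Python-Version"])  # None -- optional header, absent here
```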

    PEP headings must begin in column zero and the initial letter of
    each word must be capitalized as in book titles.  Acronyms should
    be in all capitals.  The body of each section must be indented 4
    spaces.  Code samples inside body sections should be indented a
    further 4 spaces, and other indentation can be used as required to
    make the text readable.  You must use two blank lines between the
    last line of a section's body and the next section heading.

    Tab characters must never appear in the document at all.  A PEP
    should include the Emacs stanza included by example in this PEP.

    A PEP must contain a Copyright section, and it is strongly
    recommended to put the PEP in the public domain.

    You should footnote any URLs in the body of the PEP, and a PEP
    should include a References section with those URLs expanded.


References and Footnotes

    [1] This historical record is available by the normal CVS commands
    for retrieving older revisions.  For those without direct access
    to the CVS tree, you can browse the current and past PEP revisions
    via the SourceForge web site at

    http://cvs.sourceforge.net/cgi-bin/cvsweb.cgi/python/nondist/peps/?cvsroot=python

    [2] http://sourceforge.net/tracker/?group_id=5470&atid=305470

    [3] http://sourceforge.net/tracker/?atid=355470&group_id=5470&func=browse

    [4] http://www.opencontent.org/openpub/

    [5] The script referred to here is pep2html.py, which lives in
    the same directory in the CVS tree as the PEPs themselves.  Try
    "pep2html.py --help" for details.

    The URL for viewing PEPs on the web is
    http://python.sourceforge.net/peps/


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:



From m.favas at per.dem.csiro.au  Wed Mar 21 20:44:30 2001
From: m.favas at per.dem.csiro.au (Mark Favas)
Date: Thu, 22 Mar 2001 03:44:30 +0800
Subject: [Python-Dev] test_coercion failing
Message-ID: <3AB9049E.7331F570@per.dem.csiro.au>

[Tim searches for -0's]
On Tru64 Unix (4.0F) with Compaq's C compiler I get:
Python 2.1b2 (#344, Mar 22 2001, 03:18:25) [C] on osf1V4
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
-0
>>> print "%+.17g" % -x
-0

and on Solaris 8 (Sparc) with gcc I get:
Python 2.1b2 (#23, Mar 22 2001, 03:25:27) 
[GCC 2.95.2 19991024 (release)] on sunos5
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
-0
>>> print "%+.17g" % -x
-0

while on FreeBSD 4.2 with gcc I get:
Python 2.1b2 (#3, Mar 22 2001, 03:36:19) 
[GCC 2.95.2 19991024 (release)] on freebsd4
Type "copyright", "credits" or "license" for more information.
>>> x = 0.0
>>> print "%.17g" % -x
0
>>> print "%+.17g" % -x
+0

-- 
Mark Favas  -   m.favas at per.dem.csiro.au
CSIRO, Private Bag No 5, Wembley, Western Australia 6913, AUSTRALIA



From tim.one at home.com  Wed Mar 21 21:18:54 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 15:18:54 -0500
Subject: [Python-Dev] test_coercion failing 
In-Reply-To: <20010321133032.9906836B2C1@snelboot.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEEOJHAA.tim.one@home.com>

[Jack Jansen]
> It turns out that even simple things like 0j/2 return -0.0.
>
> The culprit appears to be the statement
>     r.imag = (a.imag - a.real*ratio) / denom;
> in c_quot(), line 108.
>
> The inner part is translated into a PPC multiply-subtract instruction
> 	fnmsub   fp0, fp1, fp31, fp0
> Or, in other words, this computes "0.0 - (2.0 * 0.0)". The result
> of this is apparently -0.0. This sounds reasonable to me, or is
> this against IEEE754 rules (or C99 rules?).

I've said it twice, but I'll say it once more <wink>:  under 754 rules,

   (+0) - (+0)

must return +0 in all rounding modes except for (the exceedingly unlikely, as
it's not the default) to-minus-infinity rounding mode.  The latter case is
the only case in which it should return -0.  Under the default
to-nearest/even rounding mode, and under the to-plus-infinity and to-0
rounding modes, +0 is the required result.
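
These rules are easy to check from Python itself.  math.copysign is the
reliable way to inspect a zero's sign, since how -0 *prints* is exactly
what varies across the platforms in this thread.  A sketch for a modern
CPython:

```python
import math

def sign(z):
    """Return '+' or '-' according to the sign bit of z, zeros included."""
    return '+' if math.copysign(1.0, z) > 0 else '-'

x = 0.0
print(sign(x - x))        # '+' : (+0) - (+0) -> +0 under default rounding
print(sign(-x - x))       # '-' : (-0) - (+0) -> -0, per the 754 rules
print(sign(x - 2.0 * x))  # '+' : the reported expression, with +0 inputs
```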

However, we don't know whether a.imag is +0 or -0 on your box; it *should* be
+0.  If it were -0, then

   (-0) - (+0)

should indeed be -0 under default 754 rules.  So this still needs to be
traced back.  That is, when you say it computes "0.0 - (2.0 * 0.0)", there
are four *possible* things that could mean, depending on the signs of the
zeroes.  As is, I'm afraid we still don't know enough to say whether the -0
result is due to an unexpected -0 as one of the inputs.

> If this is all according to 754 rules the one puzzle remaining is
> why other 754 platforms don't see the same thing.

Because the antecedent is wrong:  the behavior you're seeing violates 754
rules (unless you've somehow managed to ask for to-minus-infinity rounding,
or you're getting -0 inputs for bogus reasons).

Try this:

    print repr(1.0 - 1e-100)

If that doesn't display "1.0", but something starting "0.9999"..., then
you've somehow managed to get to-minus-infinity rounding.

Another thing to try:

    print 2+0j

Does that also come out as "2-0j" for you?

What about:

    print repr((0j).real), repr((0j).imag)

?  (I'm trying to see whether -0 parts somehow get invented out of thin air.)

> Could it be that the combined multiply-subtract skips a rounding
> step that separate multiply and subtract instructions would take? My
> floating point knowledge is pretty basic, so please enlighten me....

I doubt this has anything to do with the fused mul-sub.  That operation isn't
defined as such by 754, but it would be a mondo serious hardware bug if it
didn't operate on endcase values the same way as separate mul-then-sub.
OTOH, the new complex division algorithm may generate a fused mul-sub in
places where the old algorithm did not, so I can't rule that out either.

BTW, most compilers for boxes with fused mul-add have a switch to disable
generating the fused instructions.  Might want to give that a try (if you
have such a switch, it may mask the symptom but leave the cause unknown).




From tim.one at home.com  Wed Mar 21 21:45:09 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 15:45:09 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
Message-ID: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>

When running the full test suite, test_doctest fails (in current CVS; did not
fail yesterday).  This was on Windows.  Other platforms?

Does not fail in isolation.  Doesn't matter whether or not .pyc files are
deleted first, and doesn't matter whether a regular or debug build of Python
is used.

In four runs of the full suite with regrtest -r (randomize test order),
test_doctest failed twice and passed twice.  So it's unlikely this has
something specifically to do with doctest.

roll-out-the-efence?-ly y'rs  - tim




From jeremy at alum.mit.edu  Wed Mar 21 21:41:53 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 15:41:53 -0500 (EST)
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
Message-ID: <15033.4625.822632.276247@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "TP" == Tim Peters <tim.one at home.com> writes:

  TP> In four runs of the full suite with regrtest -r (randomize test
  TP> order), test_doctest failed twice and passed twice.  So it's
  TP> unlikely this has something specifically to do with doctest.

How does doctest fail?  Does that give any indication of the nature of
the problem?  Does it fail with a core dump (or whatever Windows does
instead)?  Or is the output wrong?

Jeremy



From guido at digicool.com  Wed Mar 21 22:01:12 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 21 Mar 2001 16:01:12 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: Your message of "Wed, 21 Mar 2001 15:45:09 EST."
             <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com> 
Message-ID: <200103212101.QAA11781@cj20424-a.reston1.va.home.com>

> When running the full test suite, test_doctest fails (in current CVS; did not
> fail yesterday).  This was on Windows.  Other platforms?
> 
> Does not fail in isolation.  Doesn't matter whether or not .pyc files are
> deleted first, and doesn't matter whether a regular or debug build of Python
> is used.
> 
> In four runs of the full suite with regrtest -r (randomize test order),
> test_doctest failed twice and passed twice.  So it's unlikely this has
> something specifically to do with doctest.

Last time we had something like this it was a specific dependency
between two test modules, where if test_A was imported before test_B,
things were fine, but in the other order one of them would fail.

I noticed that someone (Jeremy?) checked in a whole slew of changes to
test modules, including test_support.  I also noticed that stuff was
added to test_support that would show up if you did "from test_support
import *".  I believe previously this was intended to only export a
small number of things; now it exports more, e.g. unittest, os, and
sys.  But that doesn't look like it would make much of a difference.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mwh21 at cam.ac.uk  Wed Mar 21 22:03:40 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 21:03:40 +0000
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
Message-ID: <m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one at home.com> writes:

> When running the full test suite, test_doctest fails (in current CVS; did not
> fail yesterday).  This was on Windows.  Other platforms?

Yes.  Linux.

I'm getting:

We expected (repr): 'doctest.Tester.runstring.__doc__'
But instead we got: 'doctest.Tester.summarize.__doc__'

> Does not fail in isolation.  

Indeed.

How does doctest order its tests?  I bet the changes just made to
dictobject.c make the order of dict.items() slightly unpredictable
(groan).

Cheers,
M.

-- 
81. In computing, turning the obvious into the useful is a living
    definition of the word "frustration".
  -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html




From jeremy at alum.mit.edu  Wed Mar 21 21:54:05 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 21 Mar 2001 15:54:05 -0500 (EST)
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>
References: <LNBBLJKPBEHFEDALKOLCAEFCJHAA.tim.one@home.com>
	<m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <15033.5357.471974.18878@w221.z064000254.bwi-md.dsl.cnc.net>

>>>>> "MWH" == Michael Hudson <mwh21 at cam.ac.uk> writes:

  MWH> "Tim Peters" <tim.one at home.com> writes:
  >> When running the full test suite, test_doctest fails (in current
  >> CVS; did not fail yesterday).  This was on Windows.  Other
  >> platforms?

  MWH> Yes.  Linux.

Interesting.  I've done four runs (-r) and not seen any errors on my
Linux box.  Maybe I'm just unlucky.

Jeremy



From tim.one at home.com  Wed Mar 21 22:13:14 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 16:13:14 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <15033.4625.822632.276247@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFEJHAA.tim.one@home.com>

[Jeremy]
> How does doctest fail?  Does that give any indication of the nature of
> the problem?  Does it fail with a core dump (or whatever Windows does
> instead)?  Or is the output wrong?

Sorry, I should know better than to say "doesn't work".  It's that the output
is wrong:

It's good up through the end of this section of output:

...
1 items had failures:
   1 of   2 in XYZ
4 tests in 2 items.
3 passed and 1 failed.
***Test Failed*** 1 failures.
(1, 4)
ok
0 of 6 examples failed in doctest.Tester.__doc__
Running doctest.Tester.__init__.__doc__
0 of 0 examples failed in doctest.Tester.__init__.__doc__
Running doctest.Tester.run__test__.__doc__
0 of 0 examples failed in doctest.Tester.run__test__.__doc__
Running


But then:

We expected (repr): 'doctest.Tester.runstring.__doc__'
But instead we got: 'doctest.Tester.summarize.__doc__'


Hmm!  Perhaps doctest is merely running sub-tests in a different order.
doctest uses whatever order dict.items() returns (for the module __dict__ and
class __dict__s, etc).  It should probably force the order.  I'm going to get
something to eat and ponder that ... if true, The Mystery is how the internal
dicts could get *built* in a different order across runs ...

BTW, does or doesn't a run of the full test suite complain here too under
your Linux box?




From tim.one at home.com  Wed Mar 21 22:17:39 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 16:17:39 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <m3pufag7gz.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEFFJHAA.tim.one@home.com>

[Michael Hudson]
> Yes.  Linux.
>
> I'm getting:
>
> We expected (repr): 'doctest.Tester.runstring.__doc__'
> But instead we got: 'doctest.Tester.summarize.__doc__'

Same thing, then (Jeremy, *don't* use -r).

>> Does not fail in isolation.

> Indeed.

> How does doctest order its tests?  I bet the changes just made to
> dictobject.c make the order of dict.items() slightly unpredictable
> (groan).

As just posted, doctest uses whatever .items() returns but probably
shouldn't.  It's hard to see how the dictobject.c changes could affect that,
but I have to agree they're the most likely suspect.  I'll back those out
locally and see whether the problem persists.

But I'm going to eat first!




From michel at digicool.com  Wed Mar 21 22:44:29 2001
From: michel at digicool.com (Michel Pelletier)
Date: Wed, 21 Mar 2001 13:44:29 -0800 (PST)
Subject: [Python-Dev] PEP 245: Python Interfaces
Message-ID: <Pine.LNX.4.32.0103211340050.25303-100000@localhost.localdomain>

Barry has just checked in PEP 245 for me.

http://python.sourceforge.net/peps/pep-0245.html

I'd like to open up the discussion phase on this PEP to anyone who is
interested in commenting on it.  I'm not sure of the proper forum, it has
been discussed to some degree on the types-sig.

Thanks,

-Michel




From mwh21 at cam.ac.uk  Wed Mar 21 23:01:15 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 22:01:15 +0000
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
References: <LNBBLJKPBEHFEDALKOLCOEFFJHAA.tim.one@home.com>
Message-ID: <m3elvqg4t0.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one at home.com> writes:

> [Michael Hudson]
> > Yes.  Linux.
> >
> > I'm getting:
> >
> > We expected (repr): 'doctest.Tester.runstring.__doc__'
> > But instead we got: 'doctest.Tester.summarize.__doc__'
> 
> Same thing, then (Jeremy, *don't* use -r).
> 
> >> Does not fail in isolation.
> 
> > Indeed.
> 
> > How does doctest order its tests?  I bet the changes just made to
> > dictobject.c make the order of dict.items() slightly unpredictable
> > (groan).
> 
> As just posted, doctest uses whatever .items() returns but probably
> shouldn't.  It's hard to see how the dictobject.c changes could
> affect that, but I have to agree they're the most likely suspect.

> I'll back those out locally and see whether the problem persists.

Fixes things here.

Oooh, look at this:

$ ../../python 
Python 2.1b2 (#3, Mar 21 2001, 21:29:14) 
[GCC 2.95.1 19990816/Linux (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import doctest
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', '_Tester__record_outcome', 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge', 'rundoc', '__module__']
>>> doctest.testmod(doctest)
(0, 53)
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', 'summarize', '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc', '_Tester__record_outcome', '__module__']

Indeed:

$ ../../python 
Python 2.1b2 (#3, Mar 21 2001, 21:29:14) 
[GCC 2.95.1 19990816/Linux (release)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> import doctest
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', '_Tester__record_outcome', 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge', 'rundoc', '__module__']
>>> doctest.Tester.__dict__['__doc__'] = doctest.Tester.__dict__['__doc__']
>>> doctest.Tester.__dict__.keys()
['__init__', '__doc__', 'run__test__', 'summarize', '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc', '_Tester__record_outcome', '__module__']

BUT, and this is where I give up:

    This has always happened!  It even happens with Python 1.5.2!

it just makes a difference now.  So maybe it's something else entirely.

Cheers,
M.

-- 
  MARVIN:  Do you want me to sit in a corner and rust, or just fall
           apart where I'm standing?
                    -- The Hitch-Hikers Guide to the Galaxy, Episode 2




From tim.one at home.com  Wed Mar 21 23:30:52 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 17:30:52 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: <m3elvqg4t0.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com>

[Michael Hudson]
> Oooh, look at this:
>
> $ ../../python
> Python 2.1b2 (#3, Mar 21 2001, 21:29:14)
> [GCC 2.95.1 19990816/Linux (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> import doctest
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', '_Tester__record_outcome',
> 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge',
> 'rundoc', '__module__']
> >>> doctest.testmod(doctest)
> (0, 53)
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', 'summarize',
> '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc',
> '_Tester__record_outcome', '__module__']

Cute!  Hard to explain, unless someone is using PyDict_Next on this dict:
since the dict has 11 items, it's exactly at the boundary where PyDict_Next
will now resize it.

> Indeed:
>
> $ ../../python
> Python 2.1b2 (#3, Mar 21 2001, 21:29:14)
> [GCC 2.95.1 19990816/Linux (release)] on linux2
> Type "copyright", "credits" or "license" for more information.
> >>> import doctest
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', '_Tester__record_outcome',
> 'runstring', 'summarize', '_Tester__runone', 'rundict', 'merge',
> 'rundoc', '__module__']
> >>> doctest.Tester.__dict__['__doc__'] = doctest.Tester.__dict__['__doc__']
> >>> doctest.Tester.__dict__.keys()
> ['__init__', '__doc__', 'run__test__', 'summarize',
> '_Tester__runone', 'rundict', 'merge', 'runstring', 'rundoc',
> '_Tester__record_outcome', '__module__']
>
> BUT, and this is where I give up:
>
>     This has always happened!  It even happens with Python 1.5.2!

Yes, but in this case you did an explicit setitem, and PyDict_SetItem *will*
resize it (because it started with 11 entries:  11*3 >= 16*2, but 10*3 <
16*2).  Nothing has changed there in many years.
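
The arithmetic Tim is citing is the dict fill-ratio check of that era: a
table must grow once entries*3 >= size*2, i.e. when it is over two-thirds
full.  A sketch of just that comparison (illustrative only; current
CPython uses different constants):

```python
def must_resize(used, table_size):
    """Old-style dict fill test: grow when the table is over 2/3 full."""
    return used * 3 >= table_size * 2

print(must_resize(10, 16))  # False: 30 < 32, ten entries still fit
print(must_resize(11, 16))  # True: 33 >= 32, the eleventh entry triggers growth
```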

> it just makes a difference now.  So maybe it's something else entirely.

Well, nobody should rely on the order of dict.items().  Curiously, doctest
actually doesn't, but the order of its verbose-mode *output* blocks changes,
and it's the regrtest.py framework that cares about that.

I'm calling this one a bug in doctest.py, and will fix it there.  Ugly:
since we can no longer rely on list.sort() not raising exceptions, it won't
be enough to replace the existing

    for k, v in dict.items():

with

    items = dict.items()
    items.sort()
    for k, v in items:

I guess

    keys = dict.keys()
    keys.sort()
    for k in keys:
        v = dict[k]

is the easiest safe alternative (these are namespace dicts, btw, so it's
certain the keys are all strings).
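
Written out in today's Python, the safe pattern is simply to iterate over
sorted keys, so traversal order never depends on dict internals (a
sketch; the namespace contents here are invented):

```python
# Invented stand-in for a class or module namespace dict.
namespace = {"summarize": 2, "runstring": 1, "__doc__": 0, "merge": 3}

visited = []
for k in sorted(namespace):  # deterministic regardless of insertion order
    v = namespace[k]
    visited.append(k)

print(visited)  # ['__doc__', 'merge', 'runstring', 'summarize']
```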

thanks-for-the-help!-ly y'rs  - tim




From guido at digicool.com  Wed Mar 21 23:36:13 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 21 Mar 2001 17:36:13 -0500
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: Your message of "Wed, 21 Mar 2001 17:30:52 EST."
             <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com> 
References: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com> 
Message-ID: <200103212236.RAA12977@cj20424-a.reston1.va.home.com>

> Cute!  Hard to explain, unless someone is using PyDict_Next on this dict:
> since the dict has 11 items, it's exactly at the boundary where PyDict_Next
> will now resize it.

It *could* be the garbage collector.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mwh21 at cam.ac.uk  Thu Mar 22 00:24:33 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 21 Mar 2001 23:24:33 +0000
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: Guido van Rossum's message of "Wed, 21 Mar 2001 17:36:13 -0500"
References: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com> <200103212236.RAA12977@cj20424-a.reston1.va.home.com>
Message-ID: <m3ae6eg0y6.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido at digicool.com> writes:

> > Cute!  Hard to explain, unless someone is using PyDict_Next on this dict:
> > since the dict has 11 items, it's exactly at the boundary where PyDict_Next
> > will now resize it.
> 
> It *could* be the garbage collector.

I think it would have to be; there just aren't that many calls to
PyDict_Next around.  I confused myself by thinking that calling keys()
called PyDict_Next, but it doesn't.

glad-that-one's-sorted-out-ly y'rs
M.

-- 
  "The future" has arrived but they forgot to update the docs.
                                        -- R. David Murray, 9 May 2000




From greg at cosc.canterbury.ac.nz  Thu Mar 22 02:37:00 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 22 Mar 2001 13:37:00 +1200 (NZST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <3AB87C4E.450723C2@lemburg.com>
Message-ID: <200103220137.NAA06583@s454.cosc.canterbury.ac.nz>

"M.-A. Lemburg" <mal at lemburg.com>:

> XXX The functions here don't copy the resource fork or other metadata on Mac.

Wouldn't it be better to fix these functions on the Mac
instead of depriving everyone else of them?

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From greg at cosc.canterbury.ac.nz  Thu Mar 22 02:39:05 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Thu, 22 Mar 2001 13:39:05 +1200 (NZST)
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <012601c0b1d8$7dc3cc50$e46940d5@hagrid>
Message-ID: <200103220139.NAA06587@s454.cosc.canterbury.ac.nz>

Fredrik Lundh <fredrik at effbot.org>:

> I associate "yield" with non-preemptive threading (yield
> to anyone else, not necessarily my caller).

Well, this flavour of generators is sort of a special case
subset of non-preemptive threading, so the usage is not
entirely inconsistent.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tim.one at home.com  Thu Mar 22 02:41:02 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 21 Mar 2001 20:41:02 -0500
Subject: [Python-Dev] test_coercion failing
In-Reply-To: <15032.22433.953503.130175@mace.lucasdigital.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEGKJHAA.tim.one@home.com>

[Flying Cougar Burnette]
> I get the same ("0" then "+0") on my irix65 O2.  test_coerce succeeds
> as well.

Tommy, it's great to hear that Irix screws up signed-zero output too!  The
two computer companies I own stock in are SGI and Microsoft.  I'm sure this
isn't a coincidence <wink>.

i'll-use-linux-when-it-gets-rid-of-those-damn-sign-bits-ly y'rs  - tim




From represearch at yahoo.com  Wed Mar 21 19:46:00 2001
From: represearch at yahoo.com (reptile research)
Date: Wed, 21 Mar 2001 19:46:00
Subject: [Python-Dev] (no subject)
Message-ID: <E14fu8l-0000lc-00@mail.python.org>



From nhodgson at bigpond.net.au  Thu Mar 22 03:07:28 2001
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Thu, 22 Mar 2001 13:07:28 +1100
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
References: <200103220137.NAA06583@s454.cosc.canterbury.ac.nz>
Message-ID: <034601c0b274$d8bab8c0$8119fea9@neil>

Greg Ewing:
> "M.-A. Lemburg" <mal at lemburg.com>:
>
> > XXX The functions here don't copy the resource fork or other metadata on
Mac.
>
> Wouldn't it be better to fix these functions on the Mac
> instead of depriving everyone else of them?

   Then they should be fixed for Windows as well where they don't copy
secondary forks either. While not used much by native code, forks are
commonly used on NT servers which serve files to Macintoshes.

   There is also the issue of other metadata. Should shutil optionally copy
ownership information? Access Control Lists? Summary information? A really
well designed module here could be very useful but quite some work.

   Neil




From nhodgson at bigpond.net.au  Thu Mar 22 03:14:22 2001
From: nhodgson at bigpond.net.au (Neil Hodgson)
Date: Thu, 22 Mar 2001 13:14:22 +1100
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
References: <200103210449.QAA06409@s454.cosc.canterbury.ac.nz><LNBBLJKPBEHFEDALKOLCGEBOJHAA.tim.one@home.com> <15032.52736.537333.260718@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <035801c0b275$cf667510$8119fea9@neil>

Jeremy Hylton:

> On the subject of keyword preferences, I like yield best because I
> first saw iterators (Icon's generators) in CLU and CLU uses yield.

   For me the benefit of "yield" is that it connotes both transfer of value
and transfer of control, just like "return", while "suspend" only connotes
transfer of control.

   "This tree yields 20 Kilos of fruit each year" and "When merging, yield
to the vehicles to your right".

   Neil




From barry at digicool.com  Thu Mar 22 04:16:30 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Wed, 21 Mar 2001 22:16:30 -0500
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
References: <3AB87C4E.450723C2@lemburg.com>
	<200103220137.NAA06583@s454.cosc.canterbury.ac.nz>
Message-ID: <15033.28302.876972.730118@anthem.wooz.org>

>>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:

    GE> Wouldn't it be better to fix these functions on the Mac
    GE> instead of depriving everyone else of them?

Either way, shutil sure is useful!



From MarkH at ActiveState.com  Thu Mar 22 06:16:09 2001
From: MarkH at ActiveState.com (Mark Hammond)
Date: Thu, 22 Mar 2001 16:16:09 +1100
Subject: [Python-Dev] Unicode and the Windows file system.
In-Reply-To: <LCEPIIGDJPKCOIHOBJEPOEDIDGAA.MarkH@ActiveState.com>
Message-ID: <LCEPIIGDJPKCOIHOBJEPOEKKDGAA.MarkH@ActiveState.com>

I have submitted patch #410465 for this.

http://sourceforge.net/tracker/?func=detail&aid=410465&group_id=5470&atid=30
5470

Comments are in the patch, so I won't repeat them here, but I would
appreciate a few reviews on the code.  Particularly, my addition of a new
format to PyArg_ParseTuple and the resulting extra string copy may raise a
few eye-brows.

I've even managed to include the new test file and its output in the patch,
so it will hopefully apply cleanly and run a full test if you want to try
it.

Thanks,

Mark.




From nas at arctrix.com  Thu Mar 22 06:44:32 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Wed, 21 Mar 2001 21:44:32 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCENJJGAA.tim_one@email.msn.com>; from tim_one@email.msn.com on Tue, Mar 20, 2001 at 01:31:49AM -0500
References: <20010319084534.A18938@glacier.fnational.com> <LNBBLJKPBEHFEDALKOLCCENJJGAA.tim_one@email.msn.com>
Message-ID: <20010321214432.A25810@glacier.fnational.com>

[Tim on comparing fringes of two trees]:
> In Icon you need to create co-expressions to solve this
> problem, because its generators aren't explicitly resumable,
> and Icon has no way to spell "kick a pair of generators in
> lockstep".  But explicitly resumable generators are in fact
> "good enough" for this classic example, which is usually used
> to motivate coroutines.

Apparently they are good for lots of other things too.  Tonight I
implemented passing values using resume().  Next, I decided to
see if I had enough magic juice to tackle the coroutine example
from Gordon's stackless tutorial.  It turns out that I didn't
need the extra functionality.  Generators are enough.

The code is not too long so I've attached it.  I figure that some
people might need a break from 2.1 release issues.  I think the
generator version is even simpler than the coroutine version.

  Neil

# Generator example:
# The program is a variation of a Simula 67 program due to Dahl & Hoare,
# who in turn credit the original example to Conway.
#
# We have a number of input lines, terminated by a 0 byte.  The problem
# is to squash them together into output lines containing 72 characters
# each.  A semicolon must be added between input lines.  Runs of blanks
# and tabs in input lines must be squashed into single blanks.
# Occurrences of "**" in input lines must be replaced by "^".
#
# Here's a test case:

test = """\
   d    =   sqrt(b**2  -  4*a*c)
twoa    =   2*a
   L    =   -b/twoa
   R    =   d/twoa
  A1    =   L + R
  A2    =   L - R\0
"""

# The program should print:
# d = sqrt(b^2 - 4*a*c);twoa = 2*a; L = -b/twoa; R = d/twoa; A1 = L + R;
# A2 = L - R
# done
# getlines: delivers the input lines
# disassemble: takes input lines and delivers them one
#    character at a time, also inserting a semicolon into
#    the stream between lines
# squash:  takes characters and passes them on, first replacing
#    "**" with "^" and squashing runs of whitespace
# assemble: takes characters and packs them into lines with 72
#    characters each; when it sees a null byte, passes the last
#    line to putline and then kills all the coroutines

from Generator import Generator

def getlines(text):
    g = Generator()
    for line in text.split('\n'):
        g.suspend(line)
    g.end()

def disassemble(cards):
    g = Generator()
    try:
        for card in cards:
            for i in range(len(card)):
                if card[i] == '\0':
                    raise EOFError 
                g.suspend(card[i])
            g.suspend(';')
    except EOFError:
        pass
    while 1:
        g.suspend('') # infinite stream, handy for squash()

def squash(chars):
    g = Generator()
    while 1:
        c = chars.next()
        if not c:
            break
        if c == '*':
            c2 = chars.next()
            if c2 == '*':
                c = '^'
            else:
                g.suspend(c)
                c = c2
        if c in ' \t':
            while 1:
                c2 = chars.next()
                if c2 not in ' \t':
                    break
            g.suspend(' ')
            c = c2
        if c == '\0':
            g.end()
        g.suspend(c)
    g.end()

def assemble(chars):
    g = Generator()
    line = ''
    for c in chars:
        if c == '\0':
            g.end()
        if len(line) == 72:
            g.suspend(line)
            line = ''
        line = line + c
    line = line + ' '*(72 - len(line))
    g.suspend(line)
    g.end()


if __name__ == '__main__':
    for line in assemble(squash(disassemble(getlines(test)))):
        print line
    print 'done'
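
For comparison only, here is an assumption about how the same pipeline might
read with the yield-based generators being discussed in this thread (PEP
255-style syntax, which 2.1 does not have; Neil's Generator class above is
his own experimental module, so this sketch swaps in plain yield and a
slightly different end-of-stream convention):

```python
def getlines(text):
    # deliver the input lines one at a time
    for line in text.split('\n'):
        yield line

def disassemble(cards):
    # deliver one character at a time, inserting ';' between lines;
    # emit a NUL and stop once the input NUL is seen
    for card in cards:
        for ch in card:
            if ch == '\0':
                yield '\0'
                return
            yield ch
        yield ';'
    yield '\0'

def squash(chars):
    # replace "**" with "^" and squash runs of blanks/tabs
    it = iter(chars)
    c = next(it)
    while True:
        if c == '*':
            c2 = next(it)
            if c2 == '*':
                c = '^'
            else:
                yield c
                c = c2
        if c in ' \t':
            while True:
                c2 = next(it)
                if c2 not in ' \t':
                    break
            yield ' '
            c = c2
        if c == '\0':
            yield '\0'
            return
        yield c
        c = next(it)

def assemble(chars):
    # pack characters into 72-column lines; flush the last (padded)
    # line when the NUL arrives or the stream ends
    line = ''
    for c in chars:
        if c == '\0':
            break
        if len(line) == 72:
            yield line
            line = ''
        line = line + c
    yield line + ' ' * (72 - len(line))

test = """\
   d    =   sqrt(b**2  -  4*a*c)
twoa    =   2*a
   L    =   -b/twoa
   R    =   d/twoa
  A1    =   L + R
  A2    =   L - R\0
"""

if __name__ == '__main__':
    for line in assemble(squash(disassemble(getlines(test)))):
        print(line)
    print('done')
```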

        



From cce at clarkevans.com  Thu Mar 22 11:14:25 2001
From: cce at clarkevans.com (Clark C. Evans)
Date: Thu, 22 Mar 2001 05:14:25 -0500 (EST)
Subject: [Python-Dev] Re: PEP 1, PEP Purpose and Guidelines
In-Reply-To: <15032.59269.4520.961715@anthem.wooz.org>
Message-ID: <Pine.LNX.4.21.0103220504280.18700-100000@clarkevans.com>

Barry,

  If you don't mind, I'd like to apply for one of them
  there PEP numbers.  Sorry for not following the guidelines,
  it won't happen again.

  Also, I believe that this isn't just my work, but rather
  a first pass at consensus on this issue via the vocal and
  silent feedback from those on the main and type special
  interest group.  I hope that I have done their ideas
  and feedback justice (if not, I'm sure I'll hear about it).

Thank you so much,

Clark

...

PEP: XXX
Title: Protocol Checking and Adaptation
Version: $Revision$
Author: Clark Evans
Python-Version: 2.2
Status: Draft
Type: Standards Track
Created: 21-Mar-2001
Updated: 23-Mar-2001

Abstract

    This proposal puts forth a built-in, explicit method for
    the adaptation (including verification) of an object to a 
    context where a specific type, class, interface, or other 
    protocol is expected.  This proposal can leverage existing
    protocols such as the type system and class hierarchy and is
    orthogonal, if not complementary, to the pending interface
    mechanism [1] and signature-based type-checking system [2].

    This proposal allows an object to answer two questions.  First,
    are you a such and such?  Meaning, does this object have a 
    particular required behavior?  And second, if not, can you give
    me a handle which is?  Meaning, can the object construct an 
    appropriate wrapper object which can provide compliance with
    the protocol expected?  This proposal does not limit what 
    such and such (the protocol) is or what compliance to that
    protocol means, and it allows other query/adapter techniques 
    to be added later and utilized through the same interface 
    and infrastructure introduced here.

Motivation

    Currently there is no standardized mechanism in Python for 
    asking if an object supports a particular protocol.  Typically,
    the existence of particular methods, especially built-in ones
    such as __getitem__, is used as an indicator of
    support for a particular protocol.  This technique works for 
    protocols blessed by GvR, such as the new enumerator proposal
    identified by a new built-in __iter__.  However, this technique
    does not admit an infallible way to identify interfaces lacking 
    a unique, built-in signature method.

    Moreover, there is no standardized way to obtain an adapter 
    for an object.  Typically, with objects passed to a context
    expecting a particular protocol, either the object knows about 
    the context and provides its own wrapper or the context knows 
    about the object and automatically wraps it appropriately.  The 
    problem with this approach is that such adaptations are one-offs,
    are not centralized in a single place of the user's code, and 
    are not executed with a common technique.  This lack of
    standardization increases code duplication with the same 
    adapter occurring in more than one place or it encourages 
    classes to be re-written instead of adapted.  In both cases,
    maintainability suffers.

    In the recent type special interest group discussion [3], there
    were two complementary quotes which motivated this proposal:

       "The deep(er) part is whether the object passed in thinks of
        itself as implementing the Foo interface. This means that
        its author has (presumably) spent at least a little time
        about the invariants that a Foo should obey."  GvR [4]

    and

       "There is no concept of asking an object which interface it
        implements. There is no "the" interface it implements. It's
        not even a set of interfaces, because the object doesn't 
        know them in advance. Interfaces can be defined after objects
        conforming to them are created." -- Marcin Kowalczyk [5]

    The first quote focuses on the intent of a class, including 
    not only the existence of particular methods, but more 
    importantly the call sequence, behavior, and other invariants,
    whereas the second quote focuses on the type signature of the
    class.  These quotes highlight a distinction between interface
    as a "declarative, I am a such-and-such" construct and interface
    as a "descriptive, it looks like a such-and-such" mechanism.

    Four positive cases for code-reuse include:

     a) It is obvious that the object has the same protocol that
        the context expects.  This occurs when the expected type or
        class happens to be the type or class of the object.
        This is the simplest and easiest case.

     b) When the object knows about the protocol that the
        context requires and knows how to adapt itself 
        appropriately.  Perhaps it already has the methods
        required, or it can make an appropriate wrapper.

     c) When the protocol knows about the object and can
        adapt it on behalf of the context.  This is often
        the case with backwards compatibility.

     d) When the context knows about the object and the 
        protocol and knows how to adapt the object so that
        the required protocol is satisfied.

    This proposal should allow each of these cases to be handled;
    however, it concentrates only on the first two cases, leaving
    the latter two, where the protocol adapts the object and where
    the context adapts the object, to other proposals.
    Furthermore, this proposal attempts to enable these four cases
    in a manner completely neutral to type checking or interface
    declaration and enforcement proposals.  

Specification

    For the purposes of this specification, let the word protocol
    signify any current or future method of stating requirements of 
    an object, be it through type checking, class membership, interface 
    examination, explicit types, etc.  Also, let the word compliance
    be dependent on, and defined by, each specific protocol.

    This proposal initially supports one protocol: type/class
    membership as defined by isinstance(object, protocol).
    Other kinds of protocols, such as interfaces, can be added through
    another proposal without loss of generality of this proposal.  
    This proposal attempts to keep the first set of protocols small
    and relatively unobjectionable.

    This proposal would introduce a new binary operator "isa".
    The left hand side of this operator is the object to be checked
    ("self"), and the right hand side is the protocol to check this
    object against ("protocol").  The return value of the operator 
    will be the left hand side if the object complies with the
    protocol, or None otherwise.

    Given an object and a protocol, the adaptation of the object is:
     a) self, if the object is already compliant with the protocol,
     b) a secondary object ("wrapper"), which provides a view of the
        object compliant with the protocol.  This is explicitly 
        vague, and wrappers are allowed to maintain their own 
        state as necessary.
     c) None, if the protocol is not understood, or if the object 
        cannot be verified compliant with the protocol and/or
        an appropriate wrapper cannot be constructed.

    Further, a new built-in function, adapt, is introduced.  This
    function takes two arguments, the object being adapted ("obj") 
    and the protocol requested of the object ("protocol").  This
    function returns the adaptation of the object for the protocol,
    either self, a wrapper, or None depending upon the circumstances.
    None may be returned if adapt does not understand the protocol,
    or if adapt cannot verify compliance or create a wrapper.

    For this machinery to work, two other components are required.
    First is a private, shared implementation of the adapt function
    and isa operator.  This private routine will have three 
    arguments: the object being adapted ("self"), the protocol 
    requested ("protocol"), and a flag ("can_wrap").  The flag
    specifies whether the adaptation may be a wrapper; if the flag is
    not set, then the adaptation may only be self or None.  This flag is
    required to support the isa operator.  The obvious case 
    mentioned in the motivation, where the object easily complies 
    with the protocol, is implemented in this private routine.  

    To enable the second case mentioned in the motivation, when 
    the object knows about the protocol, a new method slot, __adapt__
    on each object is required.  This optional slot takes three
    arguments, the object being adapted ("self"), the protocol 
    requested ("protocol"), and a flag ("can_wrap").  Like the
    other functions, it must return an adaptation, be it self, a
    wrapper if allowed, or None.  This method slot allows a class 
    to declare which protocols it supports in addition to those 
    which are part of the obvious case.

    This slot is called first, before the obvious cases are examined; 
    if None is returned, then the default processing proceeds.  If the
    default processing is wrong, then the AdaptForceNoneException
    can be thrown.  The private routine will catch this specific 
    exception and return None in this case.  This technique allows a
    class to subclass another class, yet catch the cases where 
    it would be considered substitutable for the base class.  Since 
    this is the exception, rather than the normal case, an exception 
    is warranted and is used to pass this information along.  The 
    caller of adapt or isa will be unaware of this particular exception
    as the private routine will return None in this particular case.

    Please note two important things.  First, this proposal does not
    preclude the addition of other protocols.  Second, this proposal 
    does not preclude other possible cases where adapter pattern may
    hold, such as the protocol knowing the object or the context 
    knowing the object and the protocol (cases c and d in the 
    motivation).  In fact, this proposal opens the gate for these 
    other mechanisms to be added, while keeping the change in 
    manageable chunks.

Reference Implementation and Example Usage

    -----------------------------------------------------------------
    adapter.py
    -----------------------------------------------------------------
        import types
        AdaptForceNoneException = "(private error for adapt and isa)"

        def internal_adapt(obj,protocol,can_wrap):

            # the obj may have the answer, so ask it about the ident
            adapt = getattr(obj, '__adapt__',None)
            if adapt:
                try:
                    retval = adapt(protocol,can_wrap)
                    # todo: if not can_wrap check retval for None or obj
                except AdaptForceNoneException:
                    return None
                if retval: return retval

            # the protocol may have the answer, so ask it about the obj
            pass

            # the context may have the answer, so ask it about the obj
            pass

            # check to see if the current object is ok as is
            if type(protocol) is types.TypeType or \
               type(protocol) is types.ClassType:
                if isinstance(obj,protocol):
                    return obj

            # ok... nothing matched, so return None
            return None

        def adapt(obj,protocol):
            return internal_adapt(obj,protocol,1)

        # imagine binary operator syntax
        def isa(obj,protocol):
            return internal_adapt(obj,protocol,0)

    -----------------------------------------------------------------
    test.py
    -----------------------------------------------------------------
        from adapter import adapt
        from adapter import isa
        from adapter import AdaptForceNoneException

        class KnightsWhoSayNi: pass  # shrubbery troubles

        class EggsOnly:  # an unrelated class/interface
            def eggs(self,str): print "eggs!" + str

        class HamOnly:  # used as an interface, no inheritance
            def ham(self,str): pass
            def _bugger(self): pass  # an irritating private member

        class SpamOnly: # a base class, inheritance used
            def spam(self,str): print "spam!" + str

        class EggsSpamAndHam (SpamOnly,KnightsWhoSayNi):
            def ham(self,str): print "ham!" + str
            def __adapt__(self,protocol,can_wrap):
                if protocol is HamOnly:
                    # implements HamOnly implicitly, no _bugger
                    return self
                if protocol is KnightsWhoSayNi:
                    # we are no longer the Knights who say Ni!
                    raise AdaptForceNoneException
                if protocol is EggsOnly and can_wrap:
                    # Knows how to create the eggs!
                    return EggsOnly()

        def test():
            x = EggsSpamAndHam()
            adapt(x,SpamOnly).spam("Ni!")
            adapt(x,EggsOnly).eggs("Ni!")
            adapt(x,HamOnly).ham("Ni!")
            adapt(x,EggsSpamAndHam).ham("Ni!")
            if None is adapt(x,KnightsWhoSayNi): print "IckIcky...!"
            if isa(x,SpamOnly): print "SpamOnly"
            if isa(x,EggsOnly): print "EggsOnly"
            if isa(x,HamOnly): print "HamOnly"
            if isa(x,EggsSpamAndHam): print "EggsAndSpam"
            if isa(x,KnightsWhoSayNi): print "KnightsWhoSayNi"

    -----------------------------------------------------------------
    Example Run
    -----------------------------------------------------------------
        >>> import test
        >>> test.test()
        spam!Ni!
        eggs!Ni!
        ham!Ni!
        ham!Ni!
        IckIcky...!
        SpamOnly
        HamOnly
        EggsAndSpam

Relationship To Paul Prescod's and Tim Hochberg's Type Assertion Method

    The example syntax Paul put forth recently [2] was:

        interface Interface
            def __check__(self,obj)

    Paul's proposal adds the checking part to the third case (c)
    described in the motivation, when the protocol knows
    about the object.  As stated, this could easily be added
    as a step in the internal_adapt function:

            # the protocol may have the answer, so ask it about the obj

                if typ is types.Interface:
                    if typ.__check__(obj):
                        return obj

    Further, and quite excitingly, if the syntax for this type 
    based assertion added an extra argument, "can_wrap", then this
    mechanism could be overloaded to also provide adapters to
    objects that the interface knows about.

    In short, the work put forth by Paul and company is great, and
    I don't see any reason why these two proposals couldn't work
    together in harmony, if not be completely complementary.

Relationship to Python Interfaces [1] by Michel Pelletier

    The relationship to this proposal is a bit less clear 
    to me, although an implements(obj,anInterface) built-in
    function was mentioned.  Thus, this could be added naively
    as a step in the internal_adapt function:

        if typ is types.Interface:
            if implements(obj,protocol):
                return obj

    However, there is a clear concern here.  Due to the 
    tight semantics being described in this specification,
    it is clear the isa operator proposed would have to have 
    a 1-1 correspondence with the implements function when the
    type of protocol is an Interface.  Thus, when can_wrap is
    true, __adapt__ may be called; however, it is clear that
    the return value would have to be double-checked.  Thus, 
    a more realistic change would be more like:

        def internal_interface_adapt(obj,interface):
            if implements(obj,interface):
                return obj
            else:
                return None

        def internal_adapt(obj,protocol,can_wrap):

            # the obj may have the answer, so ask it about the ident
            adapt = getattr(obj, '__adapt__',None)
            if adapt:
                try:
                    retval = adapt(protocol,can_wrap)
                except AdaptForceNoneException:
                    if type(protocol) is types.Interface:
                        return internal_interface_adapt(obj,protocol)
                    else:
                        return None
                if retval: 
                    if type(protocol) is types.Interface:
                        if can_wrap and implements(retval,protocol):
                            return retval
                        return internal_interface_adapt(obj,protocol)
                    else:
                        return retval

            if type(protocol) is types.Interface:
                return internal_interface_adapt(obj,protocol)

            # remainder of function... 

    It is significantly more complicated, but doable.

Relationship To The Iterator Proposal
 
    The iterator special interest group is proposing a new built-in
    called "__iter__", which could be replaced with __adapt__ if
    an Iterator class is introduced.  Following is an example.

        class Iterator:
            def next(self):
                raise IndexError

        class IteratorTest:
            def __init__(self,max):
                self.max = max
            def __adapt__(self,protocol,can_wrap):
                if protocol is Iterator and can_wrap:
                    class IteratorTestIterator(Iterator):
                        def __init__(self,max):
                            self.max = max
                            self.count = 0
                        def next(self):
                            self.count = self.count + 1
                            if self.count < self.max:
                              return self.count
                            return Iterator.next(self)
                    return IteratorTestIterator(self.max)

Relationship To Microsoft's QueryInterface

    Although this proposal may sound similar to Microsoft's 
    QueryInterface, it differs in a number of aspects.  First, 
    there is no special "IUnknown" interface which can be used
    for object identity, although this could be proposed as one
    of those "special" blessed interface protocol identifiers.
    Second, with QueryInterface, once an object supports a particular
    interface it must always thereafter support this interface; 
    this proposal makes no such guarantee, although one may be 
    added at a later time.  Third, implementations of Microsoft's
    QueryInterface must support a kind of equivalence relation. 
    By reflexive they mean that querying an interface for itself 
    must always succeed.  By symmetrical they mean that if one 
    can successfully query an interface IA for a second interface 
    IB, then one must also be able to successfully query the 
    interface IB for IA.  And finally, by transitive they mean that if 
    one can successfully query IA for IB and one can successfully
    query IB for IC, then one must be able to successfully query 
    IA for IC.  The ability to support this type of equivalence
    relation should be encouraged, but may not always be possible.
    Further research on this topic (by someone familiar with
    Microsoft COM) would be helpful in determining how compatible
    this proposal is.

Backwards Compatibility

    There should be no problem with backwards compatibility.  
    Indeed this proposal, save a built-in adapt() function, 
    could be tested without changes to the interpreter.

Questions and Answers

    Q:  Why was the name changed from __query__ to __adapt__?

    A:  It was clear that significant QueryInterface assumptions were
        being laid upon the proposal, when the intent was more of an 
        adapter.  Of course, if an object does not need to be adapted
        then it can be used directly and this is the basic premise.

    Q:  Why is the checking mechanism mixed with the adapter
        mechanism?

    A:  Good question.  They could be separated; however, there
        is significant overlap, if you consider the checking
        protocol as returning a compliant object (self) or
        not a compliant object (None).  In this way, adapting
        becomes a special case of checking, via the can_wrap flag.

        Really, the two could be separated out, but the concepts
        are so closely related that much work would be duplicated,
        and the overall mechanism would feel quite a bit less
        unified.

    Q:  This is just a type-coercion proposal.

    A:  No.  Certainly it could be used for type coercion, but such
        coercion would be explicit, via __adapt__ or the adapt function.
        Of course, if this were used for the iterator interface, the
        for construct might do an implicit __adapt__(Iterator), but
        this would be the exception rather than the rule.

    Q:  Why did the author write this PEP?

    A:  He wanted a simple proposal that covered the "deep part" of
        interfaces without getting tied up in signature woes.  Also, it
        was clear that the __iter__ proposal put forth is just an
        example of this type of interface.  Further, the author is
        doing XML-based client/server work and wants to write generic
        tree-based algorithms that work on particular interfaces, and
        would like these algorithms to be usable by anyone willing to
        make an "adapter" having the interface required by the
        algorithm.

    Q:  Is this in opposition to the type special interest group?

    A:  No.  It is meant as a simple, need based solution that could
        easily complement the efforts by that group.

    Q:  Why was the identifier changed from a string to a class?

    A:  This was done at Michel Pelletier's suggestion.  This mechanism
        appears to be much cleaner than the DNS string proposal, which 
        caused a few eyebrows to rise.  

    Q:  Why not handle the case where instances are used to identify 
        protocols?  In other words, 6 isa 6 (where the 6 on the right
        is promoted to a types.IntType).

    A:  Sounds like someone might object; let's keep this in a
        separate proposal.

    Q:  Why not let obj isa obj be true?  or class isa baseclass?

    A:  Sounds like someone might object; let's keep this in a
        separate proposal.

    Q:  It seems that a reverse lookup could be used, why not add this?

    A:  There are many other lookup and/or checking mechanisms that
        could be used here.  However, the goal of this PEP is to be 
        small and sweet ... having any more functionality would make
        it more objectionable to some people.  However, this proposal
        was designed in large part to be completely orthogonal to other
        methods, so these mechanisms can be added later if needed.

Credits

    This proposal was created in large part by the feedback 
    of the talented individuals on both the main mailing list
    and also the type signature list.  Specific contributors
    include (sorry if I missed someone):

        Robin Thomas, Paul Prescod, Michel Pelletier, 
        Alex Martelli, Jeremy Hylton, Carlos Ribeiro,
        Aahz Maruch, Fredrik Lundh, Rainer Deyke,
        Timothy Delaney, and Huaiyu Zhu

Copyright

    This document has been placed in the public domain.


References and Footnotes

    [1] http://python.sourceforge.net/peps/pep-0245.html
    [2] http://mail.python.org/pipermail/types-sig/2001-March/001223.html
    [3] http://www.zope.org/Members/michel/types-sig/TreasureTrove
    [4] http://mail.python.org/pipermail/types-sig/2001-March/001105.html
    [5] http://mail.python.org/pipermail/types-sig/2001-March/001206.html
    [6] http://mail.python.org/pipermail/types-sig/2001-March/001223.html





From thomas at xs4all.net  Thu Mar 22 12:14:48 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 12:14:48 +0100
Subject: Generator syntax (Re: FW: FW: [Python-Dev] Simple generator implementation)
In-Reply-To: <200103220139.NAA06587@s454.cosc.canterbury.ac.nz>; from greg@cosc.canterbury.ac.nz on Thu, Mar 22, 2001 at 01:39:05PM +1200
References: <012601c0b1d8$7dc3cc50$e46940d5@hagrid> <200103220139.NAA06587@s454.cosc.canterbury.ac.nz>
Message-ID: <20010322121448.T29286@xs4all.nl>

On Thu, Mar 22, 2001 at 01:39:05PM +1200, Greg Ewing wrote:
> Fredrik Lundh <fredrik at effbot.org>:

> > I associate "yield" with non-preemptive threading (yield
> > to anyone else, not necessarily my caller).

> Well, this flavour of generators is sort of a special case
> subset of non-preemptive threading, so the usage is not
> entirely inconsistent.

I prefer yield, but I'll yield to suspend as long as we get coroutines or
suspendable frames so I can finish my Python-embedded MUX with
task-switching Python code :-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Thu Mar 22 14:51:16 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 22 Mar 2001 08:51:16 -0500
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: Your message of "Wed, 21 Mar 2001 22:16:30 EST."
             <15033.28302.876972.730118@anthem.wooz.org> 
References: <3AB87C4E.450723C2@lemburg.com> <200103220137.NAA06583@s454.cosc.canterbury.ac.nz>  
            <15033.28302.876972.730118@anthem.wooz.org> 
Message-ID: <200103221351.IAA25632@cj20424-a.reston1.va.home.com>

> >>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:
> 
>     GE> Wouldn't it be better to fix these functions on the Mac
>     GE> instead of depriving everyone else of them?
> 
> Either way, shutil sure is useful!

Yes, but deceptively so.  What should we do?  Anyway, it doesn't
appear to be officially deprecated yet (can't see it in the docs) and
I think it may be best to keep it that way.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From pf at artcom-gmbh.de  Thu Mar 22 15:17:46 2001
From: pf at artcom-gmbh.de (Peter Funk)
Date: Thu, 22 Mar 2001 15:17:46 +0100 (MET)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <200103221351.IAA25632@cj20424-a.reston1.va.home.com> from Guido van Rossum at "Mar 22, 2001  8:51:16 am"
Message-ID: <m14g5uN-000CnEC@artcom0.artcom-gmbh.de>

Hi,

Guido van Rossum schrieb:
> > >>>>> "GE" == Greg Ewing <greg at cosc.canterbury.ac.nz> writes:
> > 
> >     GE> Wouldn't it be better to fix these functions on the Mac
> >     GE> instead of depriving everyone else of them?
> > 
> > Either way, shutil sure is useful!
> 
> Yes, but deceptively so.  What should we do?  Anyway, it doesn't
> appear to be officially deprecated yet (can't see it in the docs) and
> I think it may be best to keep it that way.

A very simple idea would be, to provide two callback hooks,
which will be invoked by each call to copyfile or remove.

Example:  Someone uses the package netatalk on Linux to provide file
services to Macs.  netatalk stores the resource forks in hidden sub
directories called .AppleDouble.  The callback function could then
copy the .AppleDouble files around using shutil.copyfile itself.

Regards, Peter




From fredrik at effbot.org  Thu Mar 22 15:37:59 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Thu, 22 Mar 2001 15:37:59 +0100
Subject: [Python-Dev] booted from sourceforge
Message-ID: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>

attempts to access the python project, the tracker (etc) results in:

    You don't have permission to access <whatever> on this server.

is it just me?

Cheers /F




From thomas at xs4all.net  Thu Mar 22 15:44:29 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 15:44:29 +0100
Subject: [Python-Dev] tests failing on irix65
In-Reply-To: <15032.57243.391141.409534@mace.lucasdigital.com>; from tommy@ilm.com on Wed, Mar 21, 2001 at 09:08:49AM -0800
References: <15032.22504.605383.113425@mace.lucasdigital.com> <20010321140704.R29286@xs4all.nl> <15032.57243.391141.409534@mace.lucasdigital.com>
Message-ID: <20010322154429.W27808@xs4all.nl>

On Wed, Mar 21, 2001 at 09:08:49AM -0800, Flying Cougar Burnette wrote:

> with these changes to test_pty.py I now get:

> test_pty
> The actual stdout doesn't match the expected stdout.
> This much did match (between asterisk lines):
> **********************************************************************
> test_pty
> **********************************************************************
> Then ...
> We expected (repr): 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'
> But instead we got: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n'
> test test_pty failed -- Writing: 'I wish to buy a fish license.\r\nFor my pet fish, Eric.\r\n', expected: 'I wish to buy a fish license.\nFor my pet fish, Eric.\n'
> 
> but when I import test.test_pty that blank line is gone.  Sounds like
> the test verification just needs to be a bit more flexible, maybe?

Yes... I'll explicitly turn \r\n into \n (at the end of the string) so the
test can still use the normal print/stdout-checking routines (mostly because
I want to avoid doing the error reporting myself) but it would still barf if
the read strings contain other trailing garbage or extra whitespace and
such.
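
Something along these lines (my guess at the normalization, not the
actual patch):

```python
def normalize_output(data):
    # Strip a single trailing '\r' from each line, so '\r\n' endings
    # compare equal to '\n', while any other trailing garbage or
    # extra whitespace still makes the comparison fail.
    return '\n'.join(line[:-1] if line.endswith('\r') else line
                     for line in data.split('\n'))
```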

I'll check in a new version in a few minutes.. Let me know if it still has
problems.

> test_openpty passes without a problem, BTW.

Good... so at least that works ;-)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From thomas at xs4all.net  Thu Mar 22 15:45:57 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 15:45:57 +0100
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>; from fredrik@effbot.org on Thu, Mar 22, 2001 at 03:37:59PM +0100
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>
Message-ID: <20010322154557.A13066@xs4all.nl>

On Thu, Mar 22, 2001 at 03:37:59PM +0100, Fredrik Lundh wrote:
> attempts to access the python project, the tracker (etc) results in:

>     You don't have permission to access <whatever> on this server.

> is it just me?

I noticed this yesterday as well, but only for a few minutes. I wasn't on SF
for long, though, so I might have hit it again if I'd tried once more. I
suspect they are/were commissioning a new (set of) webserver(s) in the pool,
and they screwed up the permissions.

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From guido at digicool.com  Thu Mar 22 15:55:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 22 Mar 2001 09:55:37 -0500
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: Your message of "Thu, 22 Mar 2001 15:37:59 +0100."
             <000f01c0b2dd$b477a3b0$e46940d5@hagrid> 
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid> 
Message-ID: <200103221455.JAA25875@cj20424-a.reston1.va.home.com>

> attempts to access the python project, the tracker (etc) results in:
> 
>     You don't have permission to access <whatever> on this server.
> 
> is it just me?
> 
> Cheers /F

No, it's SF.  From their most recent mailing (this morning!) to the
customer:

"""The good news is, it is unlikely SourceForge.net will have any
power related downtime.  In December we moved the site to Exodus, and
they have amble backup power systems to deal with the on going
blackouts."""

So my expectation is that it's a power failure -- system folks are
notoriously optimistic about the likelihood of failures... :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)




From fdrake at acm.org  Thu Mar 22 15:57:47 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Thu, 22 Mar 2001 09:57:47 -0500 (EST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <200103221351.IAA25632@cj20424-a.reston1.va.home.com>
References: <3AB87C4E.450723C2@lemburg.com>
	<200103220137.NAA06583@s454.cosc.canterbury.ac.nz>
	<15033.28302.876972.730118@anthem.wooz.org>
	<200103221351.IAA25632@cj20424-a.reston1.va.home.com>
Message-ID: <15034.4843.674513.237570@localhost.localdomain>

Guido van Rossum writes:
 > Yes, but deceptively so.  What should we do?  Anyway, it doesn't
 > appear to be officially deprecated yet (can't see it in the docs) and
 > I think it may be best to keep it that way.

  I don't think it's deceived me yet!  I see no reason to deprecate
it, and I don't recall anyone telling me it should be.  Nor do I
recall a discussion here suggesting that it should be.
  If it has hidden corners that I just haven't run into (and it *has*
been pointed out that it does have corners, at least on some
platforms), why don't we just consider those bugs that can be fixed?


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From thomas at xs4all.net  Thu Mar 22 16:03:20 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Thu, 22 Mar 2001 16:03:20 +0100
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: <200103221455.JAA25875@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Thu, Mar 22, 2001 at 09:55:37AM -0500
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid> <200103221455.JAA25875@cj20424-a.reston1.va.home.com>
Message-ID: <20010322160320.B13066@xs4all.nl>

On Thu, Mar 22, 2001 at 09:55:37AM -0500, Guido van Rossum wrote:
> > attempts to access the python project, the tracker (etc) results in:
> > 
> >     You don't have permission to access <whatever> on this server.
> > 
> > is it just me?
> > 
> > Cheers /F

> [..] my expectation that it's a power failure -- system folks are
> notoriously optimistic about the likelihood of failures... :-)

It's quite uncommon for power failures to cause permission problems, though :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From mwh21 at cam.ac.uk  Thu Mar 22 16:18:58 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 22 Mar 2001 15:18:58 +0000
Subject: [Python-Dev] booted from sourceforge
In-Reply-To: "Fredrik Lundh"'s message of "Thu, 22 Mar 2001 15:37:59 +0100"
References: <000f01c0b2dd$b477a3b0$e46940d5@hagrid>
Message-ID: <m33dc5g7bx.fsf@atrus.jesus.cam.ac.uk>

"Fredrik Lundh" <fredrik at effbot.org> writes:

> attempts to access the python project, the tracker (etc) results in:
> 
>     You don't have permission to access <whatever> on this server.
> 
> is it just me?

I was getting this a lot yesterday.  Give it a minute, and try again -
worked for me, albeit somewhat tediously.

Cheers,
M.

-- 
  Just put the user directories on a 486 with deadrat7.1 and turn the
  Octane into the afforementioned beer fridge and keep it in your
  office. The lusers won't notice the difference, except that you're
  more cheery during office hours.              -- Pim van Riezen, asr




From gward at python.net  Thu Mar 22 17:50:43 2001
From: gward at python.net (Greg Ward)
Date: Thu, 22 Mar 2001 11:50:43 -0500
Subject: [Python-Dev] Re: Alleged deprecation of shutils
In-Reply-To: <034601c0b274$d8bab8c0$8119fea9@neil>; from nhodgson@bigpond.net.au on Thu, Mar 22, 2001 at 01:07:28PM +1100
References: <200103220137.NAA06583@s454.cosc.canterbury.ac.nz> <034601c0b274$d8bab8c0$8119fea9@neil>
Message-ID: <20010322115043.A5993@cthulhu.gerg.ca>

On 22 March 2001, Neil Hodgson said:
>    Then they should be fixed for Windows as well where they don't copy
> secondary forks either. While not used much by native code, forks are
> commonly used on NT servers which serve files to Macintoshes.
> 
>    There is also the issue of other metadata. Should shutil optionally copy
> ownership information? Access Control Lists? Summary information? A really
> well designed module here could be very useful but quite some work.

There's a pretty good 'copy_file()' routine in the Distutils; I found
shutil quite inadequate, so rolled my own.  Jack Jansen patched it so it
does the "right thing" on Mac OS.  By now, it has probably copied many
files all over the place on all of your computers, so it sounds like it
works.  ;-)

See the distutils.file_util module for implementation and documentation.

        Greg
-- 
Greg Ward - Unix bigot                                  gward at python.net
http://starship.python.net/~gward/
Sure, I'm paranoid... but am I paranoid ENOUGH?



From fredrik at pythonware.com  Thu Mar 22 18:09:49 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Thu, 22 Mar 2001 18:09:49 +0100
Subject: [Python-Dev] Function in os module for available disk space, why  not?
References: <3AB5F43D.E33B188D@darwin.in-berlin.de> <20010319130713.M29286@xs4all.nl> <3AB5F9D8.74F0B55F@darwin.in-berlin.de> <029401c0b075$3c18e2e0$0900a8c0@SPIFF> <3AB62EAE.FCFD7C9F@lemburg.com> <048401c0b172$dd6892a0$e46940d5@hagrid>
Message-ID: <01bd01c0b2f2$e8702fb0$e46940d5@hagrid>

> (and my plan is to make a statvfs subset available on
> all platforms, which makes your code even simpler...)

windows patch here:
http://sourceforge.net/tracker/index.php?func=detail&aid=410547&group_id=5470&atid=305470

guess it has to wait for 2.2, though...
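
For reference, the Unix side of the computation looks roughly like
this (a sketch using attribute access; the 2.1-era code went through
the statvfs module's index constants instead):

```python
import os

def free_bytes(path):
    # Bytes available to an unprivileged user on the filesystem
    # holding path: available blocks times the fragment size.
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize
```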

Cheers /F




From greg at cosc.canterbury.ac.nz  Thu Mar 22 23:36:02 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Fri, 23 Mar 2001 10:36:02 +1200 (NZST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <m14g5uN-000CnEC@artcom0.artcom-gmbh.de>
Message-ID: <200103222236.KAA08215@s454.cosc.canterbury.ac.nz>

pf at artcom-gmbh.de (Peter Funk):

> netatalk stores the resource forks in hidden sub
> directories called .AppleDouble.

None of that is relevant if the copying is being done from
the Mac end. To the Mac it just looks like a normal Mac
file, so the standard Mac file-copying techniques will work.
No need for any callbacks.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From tommy at ilm.com  Fri Mar 23 00:03:29 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Thu, 22 Mar 2001 15:03:29 -0800 (PST)
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
Message-ID: <15034.33486.157946.686067@mace.lucasdigital.com>

Hey Folks,

When running an interactive interpreter Python currently tries to
import "readline", ostensibly to make your interactive experience a
little easier (with history, extra keybindings, etc).  For a while now
Python has also shipped with a standard module called "rlcompleter",
which adds name completion to the readline functionality.

Can anyone think of a good reason why we don't import rlcompleter
instead of readline by default?  I can give you a good reason why it
*should*, but I'd rather not bore anyone with the details if I don't
have to.

All in favor, snag the following patch....


------------%< snip %<----------------------%< snip %<------------

Index: Modules/main.c
===================================================================
RCS file: /cvsroot/python/python/dist/src/Modules/main.c,v
retrieving revision 1.51
diff -r1.51 main.c
290c290
<               v = PyImport_ImportModule("readline");
---
>               v = PyImport_ImportModule("rlcompleter");



From pf at artcom-gmbh.de  Fri Mar 23 00:10:46 2001
From: pf at artcom-gmbh.de (Peter Funk)
Date: Fri, 23 Mar 2001 00:10:46 +0100 (MET)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <200103222236.KAA08215@s454.cosc.canterbury.ac.nz> from Greg Ewing at "Mar 23, 2001 10:36: 2 am"
Message-ID: <m14gEEA-000CnEC@artcom0.artcom-gmbh.de>

Hi,

> pf at artcom-gmbh.de (Peter Funk):
> > netatalk stores the resource forks in hidden sub
> > directories called .AppleDouble.

Greg Ewing:
> None of that is relevant if the copying is being done from
> the Mac end. To the Mac it just looks like a normal Mac
> file, so the standard Mac file-copying techniques will work.
> No need for any callbacks.

You are right and I know this.  But if you program an application,
which should work on the Unix/Linux side (for example a filemanager
or something similar), you have to pay attention to these files on
your own.  The same holds true for thumbnail images usually stored
in a .xvpics subdirectory.

All current filemanagers on Linux (KDE kfm, Gnome gmc) are flawed 
in this respect.

Regards, Peter
P.S.: I'm not going to write a GUI file manager in Python using
shutil right now.  So this discussion is somewhat academic.
-- 
Peter Funk, Oldenburger Str.86, D-27777 Ganderkesee, Germany, Fax:+49 4222950260
office: +49 421 20419-0 (ArtCom GmbH, Grazer Str.8, D-28359 Bremen)




From tim.one at home.com  Fri Mar 23 04:03:03 2001
From: tim.one at home.com (Tim Peters)
Date: Thu, 22 Mar 2001 22:03:03 -0500
Subject: [Python-Dev] CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCIEKNJHAA.tim.one@home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCAELCJHAA.tim.one@home.com>

At work today, Guido and I both found lots of instabilities in current CVS
Python, under different flavors of Windows:  senseless errors in the test
suite, different behavior across runs, NULL-pointer errors in GC when running
under a debug-build Python, some kind of Windows "app error" alert box, and
weird complaints about missing attributes during Python shutdown.

Back at home, things *seem* much better, but I still get one of the errors I
saw at the office:  a NULL-pointer dereference in GC, using a debug-build
Python, in test_xmllib, while *compiling* xmllib.pyc (i.e., we're not
actually running the test yet, just compiling the module).  Alas, this does
not fail in isolation, it's only when a run of the whole test suite happens
to get to that point.  The error is in gc_list_remove, which is passed a node
whose left and right pointers are both NULL.

Only thing I know for sure is that it's not PyDict_Next's fault (I did a
quick run with *that* change commented out; made no difference).  That wasn't
just paranoia:  dict_traverse is two routines down the call stack when this
happens, and that uses PyDict_Next.

How's life on other platforms?  Anyone else ever build/test the debug Python?
Anyone have a hot efence/Insure raring to run?

not-picky-about-the-source-of-miracles-ly y'rs  - tim




From guido at digicool.com  Fri Mar 23 05:34:48 2001
From: guido at digicool.com (Guido van Rossum)
Date: Thu, 22 Mar 2001 23:34:48 -0500
Subject: [Python-Dev] Re: CVS Python is unstable
Message-ID: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>

Tim's problem can be reproduced in debug mode as follows (on Windows
as well as on Linux):

    import test.test_weakref
    import test.test_xmllib

Boom!  The debugger (on Windows) shows that it dies in some GC code.

After backing out Fred's last change to _weakref.c, this works as
expected and I get no other problems.

So I propose to back out that change and be done with it.

Here's the CVS comment:

----------------------------
revision 1.8
date: 2001/03/22 18:05:30;  author: fdrake;  state: Exp;  lines: +1 -1

Inform the cycle-detector that the a weakref object no longer needs to be
tracked as soon as it is clear; this can decrease the number of roots for
the cycle detector sooner rather than later in applications which hold on
to weak references beyond the time of the invalidation.
----------------------------

And the diff, to be backed out:

*** _weakref.c	2001/02/27 18:36:56	1.7
--- _weakref.c	2001/03/22 18:05:30	1.8
***************
*** 59,64 ****
--- 59,65 ----
      if (self->wr_object != Py_None) {
          PyWeakReference **list = GET_WEAKREFS_LISTPTR(self->wr_object);
  
+         PyObject_GC_Fini((PyObject *)self);
          if (*list == self)
              *list = self->wr_next;
          self->wr_object = Py_None;
***************
*** 78,84 ****
  weakref_dealloc(PyWeakReference *self)
  {
      clear_weakref(self);
-     PyObject_GC_Fini((PyObject *)self);
      self->wr_next = free_list;
      free_list = self;
  }
--- 79,84 ----

Fred, can you explain what the intention of this code was?

It's not impossible that the bug is actually in the debug mode macros,
but I'd rather not ship code that's unstable in debug mode -- that
defeats the purpose.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Fri Mar 23 06:10:33 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 23 Mar 2001 00:10:33 -0500
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>

[Guido]
> It's not impossible that the bug is actually in the debug mode macros,
> but I'd rather not ship code that's instable in debug mode -- that
> defeats the purpose.

I *suspect* the difference wrt debug mode is right where it's blowing up:

static void
gc_list_remove(PyGC_Head *node)
{
	node->gc_prev->gc_next = node->gc_next;
	node->gc_next->gc_prev = node->gc_prev;
#ifdef Py_DEBUG
	node->gc_prev = NULL;
	node->gc_next = NULL;
#endif
}

That is, in debug mode, the prev and next fields are nulled out, but not in
release mode.

Whenever this thing dies, the node passed in has prev and next fields that
*are* nulled out.  Since under MS debug mode, freed memory is set to a very
distinctive non-null bit pattern, this tells me that-- most likely --some
single node is getting passed to gc_list_remove *twice*.
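
In toy Python terms the failure mode is easy to model (an
illustration, not the real C code):

```python
class Node:
    def __init__(self):
        self.prev = self.next = None

def list_init(head):
    head.prev = head.next = head

def list_append(head, node):
    node.next = head
    node.prev = head.prev
    head.prev.next = node
    head.prev = node

def list_remove(node, debug=True):
    node.prev.next = node.next
    node.next.prev = node.prev
    if debug:                    # mirrors the #ifdef Py_DEBUG nulling
        node.prev = node.next = None

head = Node()
list_init(head)
node = Node()
list_append(head, node)
list_remove(node)                # fine the first time
# A second remove dereferences the nulled pointer, just as the
# debug build dereferences NULL in C:
try:
    list_remove(node)
except AttributeError:
    pass                         # 'NoneType' has no attribute 'next'
```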

I bet that's happening in release mode too ... hang on a second ... yup!  If
I remove the #ifdef above, then the pair test_weakref test_xmllib dies with a
null-pointer error here under the release build too.

and-that-ain't-good-ly y'rs  - tim




From tim.one at home.com  Fri Mar 23 06:56:05 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 23 Mar 2001 00:56:05 -0500
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGELNJHAA.tim.one@home.com>

More info on the debug-mode

    test_weakref test_xmllib

blowup in gc_list_append, and with the .pyc files already there.

While running test_weakref, we call collect() once.

Ditto while running test_xmllib:  that's when it blows up.

collect_generations() is here (***):

	else {
		generation = 0;
		collections0++;
		if (generation0.gc_next != &generation0) {
***			n = collect(&generation0, &generation1);
		}
	}

collect() is here:

	gc_list_init(&reachable);
	move_roots(young, &reachable);
***	move_root_reachable(&reachable);

move_root_reachable is here:

***		(void) traverse(op,
			       (visitproc)visit_reachable,
			       (void *)reachable);

And that's really calling dict_traverse, which is iterating over the dict.

At blowup time, the dict key is of PyString_Type, with value "ref3", and so
presumably left over from test_weakref.  The dict value is of
PyWeakProxy_Type, has a refcount of 2, and has

    wr_object   pointing to Py_NoneStruct
    wr_callback NULL
    hash        0xffffffff
    wr_prev     NULL
    wr_next     NULL

It's dying while calling visit() (really visit_reachable) on the latter.

Inside visit_reachable, we have:

		if (gc && gc->gc_refs != GC_MOVED) {

and that's interesting too, because gc->gc_refs is 0xcdcdcdcd, which is the
MS debug-mode "clean landfill" value:  freshly malloc'ed memory is filled
with 0xcd bytes (so gc->gc_refs is uninitialized trash).

My conclusion:  it's really hosed.  Take it away, Neil <wink>!




From tim.one at home.com  Fri Mar 23 07:19:19 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 23 Mar 2001 01:19:19 -0500
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKELOJHAA.tim.one@home.com>

> So I propose to back out that change and be done with it.

I just did revert the change (rev 1.8 of _weakref.c, back to 1.7), so anyone
interested in pursuing the details should NOT update.

There's another reason for not updating then:  the problem "went away" after
the next big pile of checkins, even before I reverted the change.  I assume
that's simply because things got jiggled enough so that we no longer hit
exactly the right sequence of internal operations.




From fdrake at acm.org  Fri Mar 23 07:50:21 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 01:50:21 -0500 (EST)
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
Message-ID: <15034.61997.299305.456415@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > That is, in debug mode, the prev and next fields are nulled out, but not in
 > release mode.
 > 
 > Whenever this thing dies, the node passed in has prev and next fields that
 > *are* nulled out.  Since under MS debug mode, freed memory is set to a very
 > distinctive non-null bit pattern, this tells me that-- most likely --some
 > single node is getting passed to gc_list_remove *twice*.
 > 
 > I bet that's happening in release mode too ... hang on a second ... yup!  If
 > I remove the #ifdef above, then the pair test_weakref test_xmllib dies with a
 > null-pointer error here under the release build too.

  Ok, I've been trying to keep up with all this, and playing with some
alternate patches.  The change that's been identified as causing the
problem was trying to remove the weak ref from the cycle detectors set
of known containers as soon as the ref object was no longer a
container.  When this is done by the tp_clear handler may be the
problem; the GC machinery is removing the object from the list, and
calls gc_list_remove() assuming that the object is still in the list,
but after the tp_clear handler has been called.
  I see a couple of options:

  - Document the restriction that PyObject_GC_Fini() should not be
    called on an object while it's tp_clear handler is active (more
    efficient), -or-
  - Remove the restriction (safer).

  If we take the former route, I think it is still worth removing the
weakref object from the GC list as soon as it has been cleared, in
order to keep the number of containers the GC machinery has to inspect
at a minimum.  This can be done by adding a flag to
weakref.c:clear_weakref() indicating that the object's tp_clear is
active.  The extra flag would not be needed if we took the second
option.
  Another possibility, if I do adjust the code to remove the weakref
objects from the GC list aggressively, is to only call
PyObject_GC_Init() if the weakref actually has a callback -- if there
is no callback, the weakref object does not act as a container to
begin with.
  (It is also possible that with aggressive removal of the weakref
object from the set of containers, it doesn't need to implement the
tp_clear handler at all, in which case this gets just a little bit
nicer.)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From nas at arctrix.com  Fri Mar 23 14:41:02 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 05:41:02 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCKELOJHAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 23, 2001 at 01:19:19AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCKELOJHAA.tim.one@home.com>
Message-ID: <20010323054102.A28875@glacier.fnational.com>

On Fri, Mar 23, 2001 at 01:19:19AM -0500, Tim Peters wrote:
> There's another reason for not updating then:  the problem "went away" after
> the next big pile of checkins, even before I reverted the change.  I assume
> that's simply because things got jiggled enough so that we no longer hit
> exactly the right sequence of internal operations.

Yes.

  Neil



From nas at arctrix.com  Fri Mar 23 14:47:40 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 05:47:40 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 23, 2001 at 12:10:33AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
Message-ID: <20010323054740.B28875@glacier.fnational.com>

On Fri, Mar 23, 2001 at 12:10:33AM -0500, Tim Peters wrote:
> I *suspect* the difference wrt debug mode is right where it's blowing up:
> 
> static void
> gc_list_remove(PyGC_Head *node)
> {
> 	node->gc_prev->gc_next = node->gc_next;
> 	node->gc_next->gc_prev = node->gc_prev;
> #ifdef Py_DEBUG
> 	node->gc_prev = NULL;
> 	node->gc_next = NULL;
> #endif
> }

PyObject_GC_Fini() should not be called twice on the same object
unless there is a PyObject_GC_Init() in between.  I suspect that
Fred's change made this happen.  When Py_DEBUG is not defined the
GC will do all sorts of strange things if you do this, hence the
debugging code.

  Neil



From nas at arctrix.com  Fri Mar 23 15:08:24 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 06:08:24 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <15034.61997.299305.456415@cj42289-a.reston1.va.home.com>; from fdrake@acm.org on Fri, Mar 23, 2001 at 01:50:21AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com> <15034.61997.299305.456415@cj42289-a.reston1.va.home.com>
Message-ID: <20010323060824.C28875@glacier.fnational.com>

On Fri, Mar 23, 2001 at 01:50:21AM -0500, Fred L. Drake, Jr. wrote:
> The change that's been identified as causing the problem was
> trying to remove the weak ref from the cycle detectors set of
> known containers as soon as the ref object was no longer a
> container.

I'm not sure what you mean by "no longer a container".  If the
object defines the GC type flag the GC thinks it's a container.

> When this is done by the tp_clear handler may be the problem;
> the GC machinery is removing the object from the list, and
> calls gc_list_remove() assuming that the object is still in the
> list, but after the tp_clear handler has been called.

I believe your problems are deeper than this.  If
PyObject_IS_GC(op) is true and op is reachable from other objects
known to the GC then op must be in the linked list.  I haven't
tracked down all the locations in gcmodule where this assumption
is made but visit_reachable is one example.

We could remove this restriction if we were willing to accept
some slowdown.  One way would be to add the invariant
(gc_next == NULL) if the object is not in the GC list.  PyObject_Init
and gc_list_remove would have to set this pointer.  Is it worth
doing?
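
Sketched in the same toy Python terms, that invariant might look like
this (again a model, with PyObject_Init's role played by the
constructor):

```python
class Node:
    def __init__(self):
        # Invariant: next (and prev) is None whenever the node is
        # not linked into any list.
        self.prev = self.next = None

def list_init(head):
    head.prev = head.next = head

def list_append(head, node):
    assert node.next is None, "node is already in a list"
    node.next = head
    node.prev = head.prev
    head.prev.next = node
    head.prev = node

def list_remove(node):
    if node.next is None:        # not in a list: a second remove is
        return                   # now detectable (or, as here, a no-op)
    node.prev.next = node.next
    node.next.prev = node.prev
    node.prev = node.next = None # restore the invariant
```

The cost is the extra stores on every append/remove, which is the
slowdown Neil mentions.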

  Neil



From gward at python.net  Fri Mar 23 16:04:07 2001
From: gward at python.net (Greg Ward)
Date: Fri, 23 Mar 2001 10:04:07 -0500
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <15034.33486.157946.686067@mace.lucasdigital.com>; from tommy@ilm.com on Thu, Mar 22, 2001 at 03:03:29PM -0800
References: <15034.33486.157946.686067@mace.lucasdigital.com>
Message-ID: <20010323100407.A8367@cthulhu.gerg.ca>

On 22 March 2001, Flying Cougar Burnette said:
> Can anyone think of a good reason why we don't import rlcompleter
> instead of readline by default?  I can give you a good reason why it
> *should*, but I'd rather not bore anyone with the details if I don't
> have to.

Haven't tried your patch, but when you "import rlcompleter" manually in
an interactive session, that's not enough.  You also have to call

  readline.parse_and_bind("tab: complete")

*Then* <tab> does the right thing (ie. completion in the interpreter's
global namespace).  I like it, but I'll bet Guido won't because you can
always do this:

  $ cat > ~/.pythonrc
  import readline, rlcompleter
  readline.parse_and_bind("tab: complete")

and put "export PYTHONSTARTUP=~/.pythonrc" in your ~/.profile (or
whatever) to achieve the same effect.

But I think having this convenience built-in for free would be a very
nice thing.  I used Python for over a year before I found out about
PYTHONSTARTUP, and it was another year after that that I learned about
readline.parse_and_bind().  Why not save future newbies the bother?

        Greg
-- 
Greg Ward - Linux nerd                                  gward at python.net
http://starship.python.net/~gward/
Animals can be driven crazy by placing too many in too small a pen. 
Homo sapiens is the only animal that voluntarily does this to himself.



From fdrake at acm.org  Fri Mar 23 16:22:37 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 10:22:37 -0500 (EST)
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <20010323100407.A8367@cthulhu.gerg.ca>
References: <15034.33486.157946.686067@mace.lucasdigital.com>
	<20010323100407.A8367@cthulhu.gerg.ca>
Message-ID: <15035.27197.714696.640238@localhost.localdomain>

Greg Ward writes:
 > But I think having this convenience built-in for free would be a very
 > nice thing.  I used Python for over a year before I found out about
 > PYTHONSTARTUP, and it was another year after that that I learnedabout
 > readline.parse_and_bind().  Why not save future newbies the bother?

  Maybe.  Or perhaps you should have looked at the tutorial?  ;-)


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From jeremy at alum.mit.edu  Fri Mar 23 16:31:56 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Fri, 23 Mar 2001 10:31:56 -0500 (EST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>
Message-ID: <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net>

Are there any more checkins coming?

In general -- are there any checkins other than documentation and a
fix for the GC/debug/weakref problem?

Jeremy



From fdrake at acm.org  Fri Mar 23 16:35:24 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 10:35:24 -0500 (EST)
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <20010323060824.C28875@glacier.fnational.com>
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com>
	<15034.61997.299305.456415@cj42289-a.reston1.va.home.com>
	<20010323060824.C28875@glacier.fnational.com>
Message-ID: <15035.27964.645249.362484@localhost.localdomain>

Neil Schemenauer writes:
 > I'm not sure what you mean by "no longer a container".  If the
 > object defines the GC type flag the GC thinks its a container.

  Given the assumptions you describe, removing the object from the
list isn't sufficient to not be a container.  ;-(  In which case
reverting the change (as Tim did) is probably the only way to do it.
  What I was looking for was a way to remove the weakref object from
the set of containers sooner, but apparently that isn't possible as
long as the object's type is the only thing used to determine whether it is
a container.

 > I believe your problems are deeper than this.  If
 > PyObject_IS_GC(op) is true and op is reachable from other objects

  And this only considers the object's type; the object can't be
removed from the set of containers by calling PyObject_GC_Fini().  (It
clearly can't while tp_clear is active for that object!)

 > known to the GC then op must be in the linked list.  I haven't
 > tracked down all the locations in gcmodule where this assumption
 > is made but visit_reachable is one example.

  So it's illegal to call PyObject_GC_Fini() anywhere but from the
destructor?  Please let me know so I can make this clear in the
documentation!

 > We could remove this restriction if we were willing to accept
 > some slowdown.  One way would be to add the invariant
 > (gc_next == NULL) if the object is not in the GC list.  PyObject_Init
 > and gc_list_remove would have to set this pointer.  Is it worth
 > doing?

  It's not at all clear that we need to remove the restriction --
documenting it would be required.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From ping at lfw.org  Fri Mar 23 16:44:54 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 23 Mar 2001 07:44:54 -0800 (PST)
Subject: [Python-Dev] Re: any more checkins
In-Reply-To: <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net>
Message-ID: <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org>

On Fri, 23 Mar 2001, Jeremy Hylton wrote:
> Are there any more checkins coming?

There are still issues in pydoc to be solved, but i think they can
be reasonably considered bugfixes rather than new features.  The
two main messy ones are getting reloading right (i am really hurting
for lack of a working find_module here!) and handling more strange
aliasing cases (HTMLgen, for example, provides many classes under
multiple names).  I hope it will be okay for me to work on these two
main fixes in the coming week.


-- ?!ng




From guido at digicool.com  Fri Mar 23 16:45:04 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 10:45:04 -0500
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: Your message of "Fri, 23 Mar 2001 10:31:56 EST."
             <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net> 
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>  
            <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net> 
Message-ID: <200103231545.KAA10940@cj20424-a.reston1.va.home.com>

> Are there any more checkins coming?
> 
> In general -- are there any checkins other than documentation and a
> fix for the GC/debug/weakref problem?

I think one more from Ping, for a detail in sys.excepthook.

The GC issue is dealt with as far as I'm concerned -- any changes that
Neil suggests are too speculative to attempt this late in the game,
and Fred's patch has already been backed out by Tim.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Fri Mar 23 16:49:13 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 10:49:13 -0500
Subject: [Python-Dev] Re: any more checkins
In-Reply-To: Your message of "Fri, 23 Mar 2001 07:44:54 PST."
             <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org> 
References: <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org> 
Message-ID: <200103231549.KAA10977@cj20424-a.reston1.va.home.com>

> There are still issues in pydoc to be solved, but i think they can
> be reasonably considered bugfixes rather than new features.  The
> two main messy ones are getting reloading right (i am really hurting
> for lack of a working find_module here!) and handling more strange
> aliasing cases (HTMLgen, for example, provides many classes under
> multiple names).  I hope it will be okay for me to work on these two
> main fixes in the coming week.

This is fine after the b2 release.  I consider pydoc a "1.0" release
anyway, so it's okay if its development speed is different than that
of the rest of Python!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From nas at arctrix.com  Fri Mar 23 16:53:15 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 07:53:15 -0800
Subject: [Python-Dev] RE: CVS Python is unstable
In-Reply-To: <15035.27964.645249.362484@localhost.localdomain>; from fdrake@acm.org on Fri, Mar 23, 2001 at 10:35:24AM -0500
References: <200103230434.XAA09033@cj20424-a.reston1.va.home.com> <LNBBLJKPBEHFEDALKOLCCELLJHAA.tim.one@home.com> <15034.61997.299305.456415@cj42289-a.reston1.va.home.com> <20010323060824.C28875@glacier.fnational.com> <15035.27964.645249.362484@localhost.localdomain>
Message-ID: <20010323075315.A29414@glacier.fnational.com>

On Fri, Mar 23, 2001 at 10:35:24AM -0500, Fred L. Drake, Jr. wrote:
>   So it's illegal to call PyObject_GC_Fini() anywhere but from the
> destructor?  Please let me know so I can make this clear in the
> documentation!

No, it's okay as long as the object is not reachable from other
objects.  When tuples are added to the tuple free-list
PyObject_GC_Fini() is called.  When they are removed
PyObject_GC_Init() is called.  This is okay because free tuples
aren't reachable from anywhere else.

> It's not at all clear that we need to remove the restriction --
> documenting it would be required.

Yah, sorry about that.  I had forgotten about that restriction.
When I saw Tim's message things started to come back to me.  I
had to study the code a bit to remember how things worked.

  Neil



From aahz at panix.com  Fri Mar 23 16:46:54 2001
From: aahz at panix.com (aahz at panix.com)
Date: Fri, 23 Mar 2001 10:46:54 -0500 (EST)
Subject: [Python-Dev] Re: Python T-shirts
References: <mailman.985019605.8781.python-list@python.org>
Message-ID: <200103231546.KAA29483@panix6.panix.com>

[posted to c.l.py with cc to python-dev]

In article <mailman.985019605.8781.python-list at python.org>,
Guido van Rossum  <guido at digicool.com> wrote:
>
>At the conference we handed out T-shirts with the slogan on the back
>"Python: programming the way Guido indented it".  We've been asked if
>there are any left.  Well, we gave them all away, but we're ordering
>more.  You can get them for $10 + S+H.  Write to Melissa Light
><melissa at digicool.com>.  Be nice to her!

If you're in the USA, S&H is $3.50, for a total cost of $13.50.  Also,
at the conference, all t-shirts were size L, but Melissa says that
she'll take size requests (since they haven't actually ordered the
t-shirts yet).
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"I won't accept a model of the universe in which free will, omniscient
gods, and atheism are simultaneously true."  -- M



From nas at arctrix.com  Fri Mar 23 16:55:15 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Fri, 23 Mar 2001 07:55:15 -0800
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: <200103231545.KAA10940@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 23, 2001 at 10:45:04AM -0500
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net> <15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net> <200103231545.KAA10940@cj20424-a.reston1.va.home.com>
Message-ID: <20010323075515.B29414@glacier.fnational.com>

On Fri, Mar 23, 2001 at 10:45:04AM -0500, Guido van Rossum wrote:
> The GC issue is dealt with as far as I'm concerned -- any changes that
> Neil suggests are too speculative to attempt this late in the game,
> and Fred's patch has already been backed out by Tim.

I agree.

  Neil



From ping at lfw.org  Fri Mar 23 16:56:56 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 23 Mar 2001 07:56:56 -0800 (PST)
Subject: [Python-Dev] Re: any more checkins
In-Reply-To: <Pine.LNX.4.10.10103230741110.4368-100000@skuld.kingmanhall.org>
Message-ID: <Pine.LNX.4.10.10103230750340.4368-100000@skuld.kingmanhall.org>

On Fri, 23 Mar 2001, Ka-Ping Yee wrote:
> two main messy ones are getting reloading right (i am really hurting
> for lack of a working find_module here!)

I made an attempt at this last night but didn't finish, so reloading
isn't correct at the moment for submodules in packages.  It appears
that i'm going to have to build a few pieces of infrastructure to make
it work well: a find_module that understands packages, a sure-fire
way of distinguishing the different kinds of ImportError, and a
reliable reloader in the end.  The particular issue of incompletely-
imported modules is especially thorny, and i don't know if there's
going to be any good solution for that.

Oh, and it would be nice for the "help" object to be a little more
informative, but that could just be considered documentation; and
a test_pydoc suite would be good.


-- ?!ng




From fdrake at acm.org  Fri Mar 23 16:55:10 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 10:55:10 -0500 (EST)
Subject: [Python-Dev] Re: [Python-checkins] CVS: python/dist/src/Lib inspect.py,1.10,1.11
In-Reply-To: <200103231545.KAA10940@cj20424-a.reston1.va.home.com>
References: <E14gTVp-0003Yz-00@usw-pr-cvs1.sourceforge.net>
	<15035.27756.13676.179144@w221.z064000254.bwi-md.dsl.cnc.net>
	<200103231545.KAA10940@cj20424-a.reston1.va.home.com>
Message-ID: <15035.29150.755915.883372@localhost.localdomain>

Guido van Rossum writes:
 > The GC issue is dealt with as far as I'm concerned -- any changes that
 > Neil suggests are too speculative to attempt this late in the game,
 > and Fred's patch has already been backed out by Tim.

  Agreed -- I don't think we need to change this further for 2.1.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From thomas at xs4all.net  Fri Mar 23 17:31:38 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 23 Mar 2001 17:31:38 +0100
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <20010323100407.A8367@cthulhu.gerg.ca>; from gward@python.net on Fri, Mar 23, 2001 at 10:04:07AM -0500
References: <15034.33486.157946.686067@mace.lucasdigital.com> <20010323100407.A8367@cthulhu.gerg.ca>
Message-ID: <20010323173138.E13066@xs4all.nl>

On Fri, Mar 23, 2001 at 10:04:07AM -0500, Greg Ward wrote:

> But I think having this convenience built-in for free would be a very
> nice thing.  I used Python for over a year before I found out about
> PYTHONSTARTUP, and it was another year after that that I learned about
> readline.parse_and_bind().  Why not save future newbies the bother?

And break all those poor users who use tab in interactive mode (like *me*)
to mean tab, not 'complete me please' ? No, please don't do that :)

-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From fdrake at acm.org  Fri Mar 23 18:43:55 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 12:43:55 -0500 (EST)
Subject: [Python-Dev] Doc/ tree frozen for 2.1b2 release
Message-ID: <15035.35675.217841.967860@localhost.localdomain>

  I'm freezing the doc tree until after the 2.1b2 release is made.
Please do not make any further checkins there.
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From moshez at zadka.site.co.il  Fri Mar 23 20:08:22 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Fri, 23 Mar 2001 21:08:22 +0200
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
Message-ID: <E14gWv8-0001OB-00@darjeeling>

Now that we have rich comparisons, I've suddenly realized they are
not rich enough. Consider a set type.

>>> a = set([1,2])
>>> b = set([1,3])
>>> a>b
0
>>> a<b
0
>>> max(a,b) == a
1

While I'd like

>>> max(a,b) == set([1,2,3])
>>> min(a,b) == set([1])

In current Python, there's no way to do it.
I'm still thinking about this. If it bothers anyone else, I'd
be happy to know about it.
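[A minimal sketch, in modern-Python syntax and not from the thread, of the partial order Moshe is describing: < and > mean proper subset and superset, so two overlapping sets compare false in every direction, while | and & give the "join" and "meet" he wants from max and min.]

```python
class Set:
    """Toy set type with a partial order by inclusion (illustrative only)."""
    def __init__(self, items):
        self.items = frozenset(items)
    def __eq__(self, other):
        return self.items == other.items
    def __lt__(self, other):
        return self.items < other.items        # proper subset
    def __gt__(self, other):
        return self.items > other.items        # proper superset
    def __or__(self, other):
        return Set(self.items | other.items)   # union: least upper bound
    def __and__(self, other):
        return Set(self.items & other.items)   # intersection: greatest lower bound

a, b = Set([1, 2]), Set([1, 3])
assert not (a < b) and not (a > b) and not (a == b)   # incomparable: all false
assert (a | b) == Set([1, 2, 3])                      # the "max" Moshe wants
assert (a & b) == Set([1])                            # the "min" Moshe wants
```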
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From fdrake at localhost.localdomain  Fri Mar 23 20:11:52 2001
From: fdrake at localhost.localdomain (Fred Drake)
Date: Fri, 23 Mar 2001 14:11:52 -0500 (EST)
Subject: [Python-Dev] [development doc updates]
Message-ID: <20010323191152.3019628995@localhost.localdomain>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/


Documentation for the second beta release of Python 2.1.

This includes information on future statements and lexical scoping,
and weak references.  Much of the module documentation has been
improved as well.




From guido at digicool.com  Fri Mar 23 20:20:21 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 14:20:21 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: Your message of "Fri, 23 Mar 2001 21:08:22 +0200."
             <E14gWv8-0001OB-00@darjeeling> 
References: <E14gWv8-0001OB-00@darjeeling> 
Message-ID: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>

> Now that we have rich comparisons, I've suddenly realized they are
> not rich enough. Consider a set type.
> 
> >>> a = set([1,2])
> >>> b = set([1,3])
> >>> a>b
> 0
> >>> a<b
> 0

I'd expect both of these to raise an exception.

> >>> max(a,b) == a
> 1
> 
> While I'd like
> 
> >>> max(a,b) == set([1,2,3])
> >>> min(a,b) == set([1])

You shouldn't call that max() or min().  These functions are supposed
to return one of their arguments (or an item from their argument
collection), not a composite.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From ping at lfw.org  Fri Mar 23 20:35:43 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Fri, 23 Mar 2001 11:35:43 -0800 (PST)
Subject: [Python-Dev] Re: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <E14gWv8-0001OB-00@darjeeling>
Message-ID: <Pine.LNX.4.10.10103231134360.4368-100000@skuld.kingmanhall.org>

On Fri, 23 Mar 2001, Moshe Zadka wrote:
> >>> a = set([1,2])
> >>> b = set([1,3])
[...]
> While I'd like
> 
> >>> max(a,b) == set([1,2,3])
> >>> min(a,b) == set([1])

The operation you're talking about isn't really max or min.

Why not simply write:

    >>> a | b
    [1, 2, 3]
    >>> a & b
    [1]

?


-- ?!ng




From fdrake at acm.org  Fri Mar 23 21:38:55 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Fri, 23 Mar 2001 15:38:55 -0500 (EST)
Subject: [Python-Dev] Anyone using weakrefs?
Message-ID: <15035.46175.599654.851399@localhost.localdomain>

  Is anyone out there playing with the weak references support yet?
I'd *really* appreciate receiving a short snippet of non-contrived
code that makes use of weak references to use in the documentation.
  Thanks!


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From tommy at ilm.com  Fri Mar 23 22:12:49 2001
From: tommy at ilm.com (Flying Cougar Burnette)
Date: Fri, 23 Mar 2001 13:12:49 -0800 (PST)
Subject: [Python-Dev] auto-import rlcompleter instead of readline?
In-Reply-To: <20010323173138.E13066@xs4all.nl>
References: <15034.33486.157946.686067@mace.lucasdigital.com>
	<20010323100407.A8367@cthulhu.gerg.ca>
	<20010323173138.E13066@xs4all.nl>
Message-ID: <15035.48030.112179.717830@mace.lucasdigital.com>

But if we just change the readline import to rlcompleter and *don't*
do the parse_and_bind trick then your TABs will not be impacted,
correct?  Will we lose anything by making this switch?
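[The distinction being debated can be sketched as a hypothetical PYTHONSTARTUP file: importing rlcompleter only makes the completion machinery available; it is the parse_and_bind call that actually takes TAB away.]

```python
# Hypothetical PYTHONSTARTUP file (illustration, not a proposed patch).
import readline      # command-line editing and history
import rlcompleter   # defines rlcompleter.Completer; binds no keys itself

# TAB still inserts a tab unless this line is uncommented:
# readline.parse_and_bind("tab: complete")
```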



Thomas Wouters writes:
| On Fri, Mar 23, 2001 at 10:04:07AM -0500, Greg Ward wrote:
| 
| > But I think having this convenience built-in for free would be a very
| > nice thing.  I used Python for over a year before I found out about
| > PYTHONSTARTUP, and it was another year after that that I learned about
| > readline.parse_and_bind().  Why not save future newbies the bother?
| 
| And break all those poor users who use tab in interactive mode (like *me*)
| to mean tab, not 'complete me please' ? No, please don't do that :)
| 
| -- 
| Thomas Wouters <thomas at xs4all.net>
| 
| Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
| 
| _______________________________________________
| Python-Dev mailing list
| Python-Dev at python.org
| http://mail.python.org/mailman/listinfo/python-dev



From moshez at zadka.site.co.il  Fri Mar 23 21:30:12 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Fri, 23 Mar 2001 22:30:12 +0200
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>
References: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>, <E14gWv8-0001OB-00@darjeeling>
Message-ID: <E14gYCK-0001VT-00@darjeeling>

On Fri, 23 Mar 2001 14:20:21 -0500, Guido van Rossum <guido at digicool.com> wrote:

> > >>> a = set([1,2])
> > >>> b = set([1,3])
> > >>> a>b
> > 0
> > >>> a<b
> > 0
> 
> I'd expect both of these to raise an exception.
 
I wouldn't. a>b means "does a contain b". It doesn't.
There *is* a partial order on sets: partial means a<b, a>b, a==b can all
be false, but each of them still has a well-defined meaning.

FWIW, I'd be for a partial order on complex numbers too 
(a<b iff a.real<b.real and a.imag<b.imag)

> > >>> max(a,b) == a
> > 1
> > 
> > While I'd like
> > 
> > >>> max(a,b) == set([1,2,3])
> > >>> min(a,b) == set([1])
> 
> You shouldn't call that max() or min().

I didn't. Mathematicians do.
The mathematical definition for max() I learned in Calculus 101 was
"the smallest element which is > then all arguments" (hence, properly speaking,
max should also specify the set in which it takes place. Doesn't seem to
matter in real life)

>  These functions are supposed
> to return one of their arguments

Why? 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Fri Mar 23 22:41:14 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 16:41:14 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: Your message of "Fri, 23 Mar 2001 22:30:12 +0200."
             <E14gYCK-0001VT-00@darjeeling> 
References: <200103231920.OAA13864@cj20424-a.reston1.va.home.com>, <E14gWv8-0001OB-00@darjeeling>  
            <E14gYCK-0001VT-00@darjeeling> 
Message-ID: <200103232141.QAA14771@cj20424-a.reston1.va.home.com>

> > > >>> a = set([1,2])
> > > >>> b = set([1,3])
> > > >>> a>b
> > > 0
> > > >>> a<b
> > > 0
> > 
> > I'd expect both of these to raise an exception.
>  
> I wouldn't. a>b means "does a contain b". It doesn't.
> There *is* a partial order on sets: partial means a<b, a>b, a==b can all
> be false, but that there is a meaning for all of them.

Agreed, you can define < and > any way you want on your sets.  (Why
not <= and >=?  Doesn't a<b suggest that b has at least one element not
in a?)

> FWIW, I'd be for a partial order on complex numbers too 
> (a<b iff a.real<b.real and a.imag<b.imag)

Where is that useful?  Are there mathematicians who define it this way?

> > > >>> max(a,b) == a
> > > 1
> > > 
> > > While I'd like
> > > 
> > > >>> max(a,b) == set([1,2,3])
> > > >>> min(a,b) == set([1])
> > 
> > You shouldn't call that max() or min().
> 
> I didn't. Mathematicians do.
> The mathematical definition for max() I learned in Calculus 101 was
> "the smallest element which is > then all arguments" (hence, properly speaking,
> max should also specify the set in which it takes place. Doesn't seem to
> matter in real life)

Sorry, mathematicians can overload stuff that you can't in Python.
Write your own operator, function or method to calculate this, just
don't call it max.  And as someone else remarked, a|b and a&b might
already fit this bill.

> >  These functions are supposed
> > to return one of their arguments
> 
> Why?


From tim.one at home.com  Fri Mar 23 22:47:41 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 23 Mar 2001 16:47:41 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <E14gYCK-0001VT-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>

[Moshe]
> The mathematical definition for max() I learned in Calculus 101 was
> "the smallest element which is > then all arguments"

Then I guess American and Dutch calculus are different.  Assuming you meant
to type >=, that's the definition of what we called the "least upper bound"
(or "lub") or "supremum" (or "sup"); and what I suppose you call "min" we
called "greatest lower bound" (or "glb") or "infimum".  I've never before
heard max or min used for these.  In lattices, a glb operator is often called
"meet" and a lub operator "join", but again I don't think I've ever seen them
called max or min.
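[For sets ordered by inclusion, as in Moshe's example, the lattice operators Tim names work out to the ordinary union and intersection; a sketch of the correspondence:]

```latex
% Sets ordered by inclusion form a lattice; lub ("join") and glb ("meet") are:
\operatorname{lub}\{A, B\} = A \cup B, \qquad \operatorname{glb}\{A, B\} = A \cap B
% e.g. \operatorname{lub}\{\{1,2\},\{1,3\}\} = \{1,2,3\}, \quad
%      \operatorname{glb}\{\{1,2\},\{1,3\}\} = \{1\}
```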

[Guido]
>>  These functions are supposed to return one of their arguments

[Moshe]
> Why?

Because Guido said so <wink>.  Besides, it's apparently the only meaning he
ever heard of; me too.




From esr at thyrsus.com  Fri Mar 23 23:08:52 2001
From: esr at thyrsus.com (Eric S. Raymond)
Date: Fri, 23 Mar 2001 17:08:52 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>; from tim.one@home.com on Fri, Mar 23, 2001 at 04:47:41PM -0500
References: <E14gYCK-0001VT-00@darjeeling> <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>
Message-ID: <20010323170851.A2802@thyrsus.com>

Tim Peters <tim.one at home.com>:
> [Moshe]
> > The mathematical definition for max() I learned in Calculus 101 was
> > "the smallest element which is > then all arguments"
> 
> Then I guess American and Dutch calculus are different.  Assuming you meant
> to type >=, that's the definition of what we called the "least upper bound"
> (or "lub") or "supremum" (or "sup"); and what I suppose you call "min" we
> called "greatest lower bound" (or "glb") or "infimum".  I've never before
> heard max or min used for these.  In lattices, a glb operator is often called
> "meet" and a lub operator "join", but again I don't think I've ever seen them
> called max or min.

Eric, speaking as a defrocked mathematician who was at one time rather
intimate with lattice theory, concurs.  However, Tim, I suspect you
will shortly discover that Moshe ain't Dutch.  I didn't ask and I
could be wrong, but at PC9 Moshe's accent and body language fairly
shouted "Israeli" at me.
-- 
		<a href="http://www.tuxedo.org/~esr/">Eric S. Raymond</a>

[President Clinton] boasts about 186,000 people denied firearms under
the Brady Law rules.  The Brady Law has been in force for three years.  In
that time, they have prosecuted seven people and put three of them in
prison.  You know, the President has entertained more felons than that at
fundraising coffees in the White House, for Pete's sake."
	-- Charlton Heston, FOX News Sunday, 18 May 1997



From tim.one at home.com  Fri Mar 23 23:11:50 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 23 Mar 2001 17:11:50 -0500
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <20010323170851.A2802@thyrsus.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEPEJHAA.tim.one@home.com>

> Eric, speaking as a defrocked mathematician who was at one time rather
> intimate with lattice theory, concurs.  However, Tim, I suspect you
> will shortly discover that Moshe ain't Dutch.  I didn't ask and I
> could be wrong, but at PC9 Moshe's accent and body language fairly
> shouted "Israeli" at me.

Well, applying Moshe's theory of max to my message, you should have realized
that Israeli = max{American, Dutch}.  That is

    Then I guess American and Dutch calculus are different.

was missing

    (from Israeli calculus)

As you'll shortly discover from his temper when his perfidious schemes are
frustrated, Guido is the Dutch guy in this debate <wink>.

although-i-prefer-to-be-thought-of-as-plutonian-ly y'rs  - tim




From guido at digicool.com  Fri Mar 23 23:29:02 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 23 Mar 2001 17:29:02 -0500
Subject: [Python-Dev] Python 2.1b2 released
Message-ID: <200103232229.RAA19777@cj20424-a.reston1.va.home.com>

On time, and with a minimum of fuss, we've released Python 2.1b2.
Thanks again to the many developers who contributed!

Check it out on the Python website:

    http://www.python.org/2.1/

or on SourceForge:

    http://sourceforge.net/project/showfiles.php?group_id=5470&release_id=28334

As befits a second beta release, there's no really big news since
2.1b1 was released on March 2:

- Bugs fixed and documentation added. There's now an appendix of the
  Reference Manual documenting nested scopes:

    http://python.sourceforge.net/devel-docs/ref/futures.html

- When nested scopes are enabled by "from __future__ import
  nested_scopes", this also applies to exec, eval() and execfile(),
  and in the interactive interpreter (when using -i).

- Assignment to the internal global variable __debug__ is now illegal.

- unittest.py, a unit testing framework by Steve Purcell (PyUNIT,
  inspired by JUnit), is now part of the standard library.  See the
  PyUnit webpage for documentation:

    http://pyunit.sourceforge.net/
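[As an aside, not part of the announcement: the nested-scopes behavior the notes refer to looks like this once the future statement is in effect.]

```python
# Illustrative only; the __future__ import was required in 2.1 and is a
# no-op on later Pythons, where nested scopes are the default.
from __future__ import nested_scopes

def make_adder(n):
    def add(x):
        return x + n    # 'n' resolves in the enclosing function's scope
    return add

assert make_adder(3)(4) == 7
```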

Andrew Kuchling has written (and is continuously updating) an
extensive overview: What's New in Python 2.1:

    http://www.amk.ca/python/2.1/

See also the Release notes posted on SourceForge:

    http://sourceforge.net/project/shownotes.php?release_id=28334

We are planning to make the final release of Python 2.1 on April 13;
we may release a release candidate a week earlier.

We're also planning a bugfix release for Python 2.0, dubbed 2.0.1; we
don't have a release schedule for this yet.  We could use a volunteer
to act as the bug release manager!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From paulp at ActiveState.com  Sat Mar 24 00:54:19 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 15:54:19 -0800
Subject: [Python-Dev] [Fwd: Python 2.1b2 released]
Message-ID: <3ABBE22B.DBAE4552@ActiveState.com>


-------- Original Message --------
Subject: Python 2.1b2 released
Date: Fri, 23 Mar 2001 17:29:02 -0500
From: Guido van Rossum <guido at digicool.com>
To: python-dev at python.org, Python mailing list
<python-list at python.org>,python-announce at python.org

On time, and with a minimum of fuss, we've released Python 2.1b2.
Thanks again to the many developers who contributed!

Check it out on the Python website:

    http://www.python.org/2.1/

or on SourceForge:

   
http://sourceforge.net/project/showfiles.php?group_id=5470&release_id=28334

As befits a second beta release, there's no really big news since
2.1b1 was released on March 2:

- Bugs fixed and documentation added. There's now an appendix of the
  Reference Manual documenting nested scopes:

    http://python.sourceforge.net/devel-docs/ref/futures.html

- When nested scopes are enabled by "from __future__ import
  nested_scopes", this also applies to exec, eval() and execfile(),
  and in the interactive interpreter (when using -i).

- Assignment to the internal global variable __debug__ is now illegal.

- unittest.py, a unit testing framework by Steve Purcell (PyUNIT,
  inspired by JUnit), is now part of the standard library.  See the
  PyUnit webpage for documentation:

    http://pyunit.sourceforge.net/

Andrew Kuchling has written (and is continuously updating) an
extensive overview: What's New in Python 2.1:

    http://www.amk.ca/python/2.1/

See also the Release notes posted on SourceForge:

    http://sourceforge.net/project/shownotes.php?release_id=28334

We are planning to make the final release of Python 2.1 on April 13;
we may release a release candidate a week earlier.

We're also planning a bugfix release for Python 2.0, dubbed 2.0.1; we
don't have a release schedule for this yet.  We could use a volunteer
to act as the bug release manager!

--Guido van Rossum (home page: http://www.python.org/~guido/)

-- 
http://mail.python.org/mailman/listinfo/python-list



From paulp at ActiveState.com  Sat Mar 24 01:15:30 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 16:15:30 -0800
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" 
 Comparisons?
References: <E14gYCK-0001VT-00@darjeeling> <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com> <20010323170851.A2802@thyrsus.com>
Message-ID: <3ABBE722.B29684A1@ActiveState.com>

"Eric S. Raymond" wrote:
> 
>...
> 
> Eric, speaking as a defrocked mathematician who was at one time rather
> intimate with lattice theory, concurs.  However, Tim, I suspect you
> will shortly discover that Moshe ain't Dutch.  I didn't ask and I
> could be wrong, but at PC9 Moshe's accent and body language fairly
> shouted "Israeli" at me.

Not to mention his top-level-domain. Sorry, I couldn't resist.

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From paulp at ActiveState.com  Sat Mar 24 01:21:10 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 16:21:10 -0800
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" 
 Comparisons?
References: <LNBBLJKPBEHFEDALKOLCAEPBJHAA.tim.one@home.com>
Message-ID: <3ABBE876.8EC91425@ActiveState.com>

Tim Peters wrote:
> 
> [Moshe]
> > The mathematical definition for max() I learned in Calculus 101 was
> > "the smallest element which is > then all arguments"
> 
> Then I guess American and Dutch calculus are different.  Assuming you meant
> to type >=, that's the definition of what we called the "least upper bound"
> (or "lub") or "supremum" (or "sup"); and what I suppose you call "min" we
> called "greatest lower bound" (or "glb") or "infimum".  

As long as we're shooting the shit on a Friday afternoon...

http://www.emba.uvm.edu/~read/TI86/maxmin.html
http://www.math.com/tables/derivatives/extrema.htm

Look at that domain name. Are you going to argue with that??? A
corporation dedicated to mathematics?

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From paulp at ActiveState.com  Sat Mar 24 02:16:03 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Fri, 23 Mar 2001 17:16:03 -0800
Subject: [Python-Dev] Making types behave like classes
Message-ID: <3ABBF553.274D535@ActiveState.com>

These are some half-baked ideas about getting classes and types to look
more similar. I would like to know whether they are workable or not and
so I present them to the people best equipped to tell me.

Many extension types have a __getattr__ that looks like this:

static PyObject *
Xxo_getattr(XxoObject *self, char *name)
{
	// try to do some work with known attribute names, else:

	return Py_FindMethod(Xxo_methods, (PyObject *)self, name);
}

Py_FindMethod can (despite its name) return any Python object, including
ordinary (non-function) attributes. It also has complete access to the
object's state and type through the self parameter. Here's what we do
today for __doc__:

		if (strcmp(name, "__doc__") == 0) {
			char *doc = self->ob_type->tp_doc;
			if (doc != NULL)
				return PyString_FromString(doc);
		}

Why can't we do this for all magic methods? 

	* __class__ would return the type object
	* __add__, __len__, __call__, ... would return a method wrapper around
the appropriate slot
	* __init__ might map to a no-op

I think that Py_FindMethod could even implement inheritance between
types if we wanted.

We already do this magic for __methods__ and __doc__. Why not for all of
the magic methods?
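A rough Python-side sketch of the dispatch being proposed (the helper name `find_method` and the `methods` table are hypothetical; the real Py_FindMethod does this in C against a PyMethodDef table):

```python
def find_method(methods, self, name):
    """Toy analogue of Py_FindMethod: answer a few magic names from
    the object's type, then fall back to the method table."""
    if name == "__doc__":
        return type(self).__doc__      # like reading tp_doc
    if name == "__class__":
        return type(self)              # the type object itself
    try:
        return methods[name]
    except KeyError:
        raise AttributeError(name)
```

Under a scheme like this, every extension type that routes its getattr through the one shared helper would grow the magic attributes for free.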

Many other types implement no getattr at all (the slot is NULL). In that
case, I think that we have carte blanche to make their getattr
behavior as instance-like as possible.

Finally there are the types with getattrs that do not dispatch to
Py_FindMethod. We can just change those over manually. Extension authors
will do the same when they realize that their types are not inheriting
the features that the other types are.

Benefits:

	* objects based on extension types would "look more like" classes to
Python programmers so there is less confusion about how they are
different

	* users could stop using the type() function to get concrete types and
instead use __class__. After a version or two, type() could be formally
deprecated in favor of isinstance and __class__.

	* we will have started some momentum towards type/class unification
which we could continue on into __setattr__ and subclassing.

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From jafo at tummy.com  Sat Mar 24 07:50:08 2001
From: jafo at tummy.com (Sean Reifschneider)
Date: Fri, 23 Mar 2001 23:50:08 -0700
Subject: [Python-Dev] Python 2.1b2 SRPM (was: Re: Python 2.1b2 released)
In-Reply-To: <200103232229.RAA19777@cj20424-a.reston1.va.home.com>; from guido@digicool.com on Fri, Mar 23, 2001 at 05:29:02PM -0500
References: <200103232229.RAA19777@cj20424-a.reston1.va.home.com>
Message-ID: <20010323235008.A30668@tummy.com>

Shy of RPMs because of library or other dependency problems with most of
the RPMs you pick up?  The cure, in my experience, is to pick up an SRPM.
All you need to do to build a binary package tailored to your system is run
"rpm --rebuild <packagename>.src.rpm".

I've just put up an SRPM of the 2.1b2 release at:

   ftp://ftp.tummy.com/pub/tummy/RPMS/SRPMS/

Again, this one builds the executable as "python2.1", and can be installed
alongside your normal Python on the system.  Want to check out a great new
feature?  Type "python2.1 /usr/bin/pydoc string".

Download the SRPM from above, and most users can install a binary built
against exactly the set of packages on their system by doing:

   rpm --rebuild python-2.1b2-1tummy.src.rpm
   rpm -i /usr/src/redhat/RPMS/i386/python*2.1b2-1tummy.i386.rpm

Note that this release enables "--with-pymalloc".  If you experience
problems with modules you use, please report the module and how it can be
reproduced so that these issues can be taken care of.

Enjoy,
Sean
-- 
 Total strangers need love, too; and I'm stranger than most.
Sean Reifschneider, Inimitably Superfluous <jafo at tummy.com>
tummy.com - Linux Consulting since 1995. Qmail, KRUD, Firewalls, Python



From moshez at zadka.site.co.il  Sat Mar 24 07:53:03 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 08:53:03 +0200
Subject: [Python-Dev] test_minidom crash
Message-ID: <E14ghv5-0003fu-00@darjeeling>

The bug is in Lib/xml/__init__.py

__version__ = "1.9".split()[1]

I don't know what it was supposed to be, but .split() without an
argument splits on whitespace. Best guess is "1.9".split('.')??
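The failure mode is easy to reproduce: with the keyword expanded there are three whitespace-separated fields, but after "cvs export -kv" there is only one, so the `[1]` lookup fails at import time:

```python
expanded = "$Revision: 1.9 $"   # what a normal checkout contains
collapsed = "1.9"               # what "cvs export -kv" leaves behind

print(expanded.split())         # ['$Revision:', '1.9', '$']
print(collapsed.split())        # ['1.9'] -- so index 1 is out of range

try:
    collapsed.split()[1]
except IndexError:
    print("IndexError at import time")
```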

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Sat Mar 24 08:30:47 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 09:30:47 +0200
Subject: [Python-Dev] Py2.1b2/bsddb build problems
Message-ID: <E14giVb-00051a-00@darjeeling>

setup.py needs the following lines:

        if self.compiler.find_library_file(lib_dirs, 'db1'):
            dblib = ['db1']

(right after 

        if self.compiler.find_library_file(lib_dirs, 'db'):
            dblib = ['db'])

to create bsddb correctly on my system (otherwise it gets installed
but cannot be imported).

I'm using Debian sid 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From tim.one at home.com  Sat Mar 24 08:52:28 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 24 Mar 2001 02:52:28 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <E14ghv5-0003fu-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCOEAGJIAA.tim.one@home.com>

[Moshe Zadka]
> The bug is in Lib/xml/__init__.py
>
> __version__ = "1.9".split()[1]

Believe me, we would not have shipped 2.1b2 if it failed any of the std tests
(and I ran the whole suite 8 ways:  with and without nuking all .pyc/.pyo
files first, with and without -O, and under release and debug builds).

> I don't know what it was supposed to be, but .split() without an
> argument splits on whitespace. best guess is "1.9".split('.') ??

On my box that line is:

__version__ = "$Revision: 1.9 $".split()[1]

So is this some CVS retrieval screwup?




From moshez at zadka.site.co.il  Sat Mar 24 09:01:44 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 10:01:44 +0200
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOEAGJIAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCOEAGJIAA.tim.one@home.com>
Message-ID: <E14gizY-0005B1-00@darjeeling>

On Sat, 24 Mar 2001 02:52:28 -0500, "Tim Peters" <tim.one at home.com> wrote:
 
> Believe me, we would not have shipped 2.1b2 if it failed any of the std tests
> (and I ran the whole suite 8 ways:  with and without nuking all .pyc/.pyo
> files first, with and without -O, and under release and debug builds).
> 
> > I don't know what it was supposed to be, but .split() without an
> > argument splits on whitespace. best guess is "1.9".split('.') ??
> 
> On my box that line is:
> 
> __version__ = "$Revision: 1.9 $".split()[1]
> 
> So this is this some CVS retrieval screwup?

Probably.
But nobody cares about your machine <1.9 wink>
In the Py2.1b2 you shipped, the line says
'''
__version__ = "1.9".split()[1]
'''
It's line 18.
That, or someone managed to crack one of the routers from SF to me.

should-we-start-signing-our-releases-ly y'rs, Z. 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From tim.one at home.com  Sat Mar 24 09:19:20 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 24 Mar 2001 03:19:20 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <E14gizY-0005B1-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCIEAIJIAA.tim.one@home.com>

> Probably.
> But nobody cares about your machine <1.9 wink>
> In the Py2.1b2 you shipped, the line says
> '''
> __version__ = "1.9".split()[1]
> '''
> It's line 18.

No, in the 2.1b2 I installed on my machine, from the installer I sucked down
from SourceForge, the line is what I said it was:

__version__ = "$Revision: 1.9 $".split()[1]

So you're talking about something else, but I don't know what ...

Ah, OK!  It's that silly source tarball, Python-2.1b2.tgz.  I just sucked
that down from SF, and *that* does have the damaged line just as you say (in
Lib/xml/__init__.py).

I guess we're going to have to wait for Guido to wake up and explain how this
got hosed ... in the meantime, switch to Windows and use a real installer
<wink>.




From martin at loewis.home.cs.tu-berlin.de  Sat Mar 24 09:19:44 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 24 Mar 2001 09:19:44 +0100
Subject: [Python-Dev] (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
Message-ID: <200103240819.f2O8JiF01844@mira.informatik.hu-berlin.de>

>> The mathematical definition for max() I learned in Calculus 101 was
>> "the smallest element which is > then all arguments"
>
>Then I guess American and Dutch calculus are different.
[from Israeli calculus]

The missing bit linking the two (sup and max) is

"The supremum of S is equal to its maximum if S possesses a greatest
member."
[http://www.cenius.fsnet.co.uk/refer/maths/articles/s/supremum.html]

So given a subset of a lattice, it may not have a maximum, but it will
always have a supremum. It appears that the Python max function
differs from the mathematical maximum in that respect: max will return
a value, even if that is not the "largest value"; the mathematical
maximum might give no value.

Regards,
Martin




From moshez at zadka.site.co.il  Sat Mar 24 10:13:46 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 11:13:46 +0200
Subject: [Python-Dev] setup.py is too aggressive
Message-ID: <E14gk7G-0005Wh-00@darjeeling>

It seems to me setup.py tries to build modules even when it's impossible.
E.g., I had to apply the attached patch so I no longer get ImportErrors
where the module complains that it could not find a symbol.

*** Python-2.1b2/setup.py	Wed Mar 21 09:44:53 2001
--- Python-2.1b2-changed/setup.py	Sat Mar 24 10:49:20 2001
***************
*** 326,331 ****
--- 326,334 ----
              if (self.compiler.find_library_file(lib_dirs, 'ndbm')):
                  exts.append( Extension('dbm', ['dbmmodule.c'],
                                         libraries = ['ndbm'] ) )
+             elif (self.compiler.find_library_file(lib_dirs, 'db1')):
+                 exts.append( Extension('dbm', ['dbmmodule.c'],
+                                        libraries = ['db1'] ) )
              else:
                  exts.append( Extension('dbm', ['dbmmodule.c']) )
  
***************
*** 348,353 ****
--- 351,358 ----
          dblib = []
          if self.compiler.find_library_file(lib_dirs, 'db'):
              dblib = ['db']
+         if self.compiler.find_library_file(lib_dirs, 'db1'):
+             dblib = ['db1']
          
          db185_incs = find_file('db_185.h', inc_dirs,
                                 ['/usr/include/db3', '/usr/include/db2'])

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From tim.one at home.com  Sat Mar 24 11:19:15 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 24 Mar 2001 05:19:15 -0500
Subject: [Python-Dev] RE: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <200103240819.f2O8JiF01844@mira.informatik.hu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEAMJIAA.tim.one@home.com>

[Martin v. Loewis]
> The missing bit linking the two (sup and max) is
>
> "The supremum of S is equal to its maximum if S possesses a greatest
> member."
> [http://www.cenius.fsnet.co.uk/refer/maths/articles/s/supremum.html]
>
> So given a subset of a lattice, it may not have a maximum, but it will
> always have a supremum. It appears that the Python max function
> differs from the mathematical maximum in that respect: max will return
> a value, even if that is not the "largest value"; the mathematical
> maximum might give no value.

Note that the definition of supremum given on that page can't be satisfied in
general for lattices.  For example "x divides y" induces a lattice, where gcd
is the glb and lcm (least common multiple) the lub.  The set {6, 15} then has
lub 30, but is not a supremum under the 2nd clause of that page because 10
divides 30 but divides neither of {6, 15} (so there's an element "less than" (== that
divides) 30 which no element in the set is "larger than").

So that defn. is suitable for real analysis, but the more general defn. of
sup(S) is simply that X = sup(S) iff X is an upper bound for S (same as the
1st clause on the referenced page), and that every upper bound Y of S is >=
X.  That works for lattices too.

Since Python's max works on sequences, and never terminates given an infinite
sequence, it only makes *sense* to ask what max(S) returns for finite
sequences S.  Under a total ordering, every finite set S has a maximal
element (an element X of S such that for all Y in S Y <= X), and Python's
max(S) does return one.  If there's only a partial ordering, Python's max()
is unpredictable (may or may not blow up, depending on the order the elements
are listed; e.g., [a, b, c] where a<b and c<b but a and c aren't comparable:
in that order, max returns b, but if given in order [a, c, b] max blows up).

Since this is all obvious to the most casual observer <0.9 wink>, it remains
unclear what the brouhaha is about.
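The order-dependence Tim describes can be sketched with a toy class (hypothetical, not in the stdlib) that models the divisibility order, raising TypeError for incomparable pairs:

```python
# Toy model of a strict partial order: Div(x) > Div(y) iff y properly
# divides x; comparing incomparable elements raises TypeError.
class Div:
    def __init__(self, n):
        self.n = n
    def __gt__(self, other):
        if self.n % other.n == 0:
            return self.n != other.n      # other divides self
        if other.n % self.n == 0:
            return False                  # self divides other
        raise TypeError("incomparable under divisibility")

a, b, c = Div(2), Div(6), Div(3)   # a < b and c < b; a, c incomparable

print(max([a, b, c]).n)            # 6: b is met before the bad pair

try:
    max([a, c, b])                 # first comparison is c > a ...
except TypeError:
    print("blows up when a and c meet")
```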




From loewis at informatik.hu-berlin.de  Sat Mar 24 13:02:53 2001
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 24 Mar 2001 13:02:53 +0100 (MET)
Subject: [Python-Dev] setup.py is too aggressive
Message-ID: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>

> It seems to me setup.py tries to build libraries even when it's
> impossible E.g., I had to add the patch attached so I will get no
> more ImportErrors where the module shouts at me that it could not
> find a symbol.

The more general problem here is that building of a module may fail:
Even if a library is detected correctly, it might be that additional
libraries are needed. In some cases, it helps to put the correct
module line into Modules/Setup (which would have helped in your case);
then setup.py will not attempt to build the module.

However, there may be cases where a module cannot be built at all:
either some libraries are missing, or the module won't work on the
system for some other reason (e.g. since the system library it relies
on has some bug).

There should be a mechanism to tell setup.py not to build a module at
all. Since it is looking into Modules/Setup anyway, perhaps a

*excluded*
dbm

syntax in Modules/Setup would be appropriate? Of course, makesetup
needs to be taught such a syntax. Alternatively, an additional
configuration file or command line options might work.

In any case, distributors are certainly advised to run the testsuite
and potentially remove or fix modules for which the tests fail.

Regards,
Martin



From moshez at zadka.site.co.il  Sat Mar 24 13:09:04 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 14:09:04 +0200
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>
References: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>
Message-ID: <E14gmqu-0006Ex-00@darjeeling>

On Sat, 24 Mar 2001, Martin von Loewis <loewis at informatik.hu-berlin.de> wrote:

> In any case, distributors are certainly advised to run the testsuite
> and potentially remove or fix modules for which the tests fail.

These, however, aren't flagged as failures -- they're flagged as
ImportErrors which are ignored during tests
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From loewis at informatik.hu-berlin.de  Sat Mar 24 13:23:47 2001
From: loewis at informatik.hu-berlin.de (Martin von Loewis)
Date: Sat, 24 Mar 2001 13:23:47 +0100 (MET)
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <E14gmqu-0006Ex-00@darjeeling> (message from Moshe Zadka on Sat,
	24 Mar 2001 14:09:04 +0200)
References: <200103241202.NAA19000@pandora.informatik.hu-berlin.de> <E14gmqu-0006Ex-00@darjeeling>
Message-ID: <200103241223.NAA19062@pandora.informatik.hu-berlin.de>

> > In any case, distributors are certainly advised to run the testsuite
> > and potentially remove or fix modules for which the tests fail.
> 
> These, however, aren't flagged as failures -- they're flagged as
> ImportErrors which are ignored during tests

I see. Is it safe to say, for all modules in the core, that importing
them has no "dangerous" side effect? In that case, setup.py could
attempt to import them after they've been build, and delete the ones
that fail to import. Of course, that would also delete modules where
setting LD_LIBRARY_PATH might cure the problem...

Regards,
Martin



From moshez at zadka.site.co.il  Sat Mar 24 13:24:48 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 14:24:48 +0200
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <200103241223.NAA19062@pandora.informatik.hu-berlin.de>
References: <200103241223.NAA19062@pandora.informatik.hu-berlin.de>, <200103241202.NAA19000@pandora.informatik.hu-berlin.de> <E14gmqu-0006Ex-00@darjeeling>
Message-ID: <E14gn68-0006Jk-00@darjeeling>

On Sat, 24 Mar 2001, Martin von Loewis <loewis at informatik.hu-berlin.de> wrote:

> I see. Is it safe to say, for all modules in the core, that importing
> them has no "dangerous" side effect? In that case, setup.py could
> attempt to import them after they've been build, and delete the ones
> that fail to import. Of course, that would also delete modules where
> setting LD_LIBRARY_PATH might cure the problem...

So people who build will have to set LD_LIBRARY_PATH too. I don't see a problem
with that...
(particularly since this means that, theoretically, only modules
whose tests pass will be installed...)
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Sat Mar 24 14:10:21 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 08:10:21 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 08:53:03 +0200."
             <E14ghv5-0003fu-00@darjeeling> 
References: <E14ghv5-0003fu-00@darjeeling> 
Message-ID: <200103241310.IAA21370@cj20424-a.reston1.va.home.com>

> The bug is in Lib/xml/__init__.py
> 
> __version__ = "1.9".split()[1]
> 
> I don't know what it was supposed to be, but .split() without an
> argument splits on whitespace. best guess is "1.9".split('.') ??

This must be because I used "cvs export -kv" to create the tarball
this time.  This may warrant a release update :-(

--Guido van Rossum (home page: http://www.python.org/~guido/)



From ping at lfw.org  Sat Mar 24 14:33:05 2001
From: ping at lfw.org (Ka-Ping Yee)
Date: Sat, 24 Mar 2001 05:33:05 -0800 (PST)
Subject: [Python-Dev] Re: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <200103240819.f2O8JiF01844@mira.informatik.hu-berlin.de>
Message-ID: <Pine.LNX.4.10.10103240532080.4368-100000@skuld.kingmanhall.org>

On Sat, 24 Mar 2001, Martin v. Loewis wrote:
> So given a subset of a lattice, it may not have a maximum, but it will
> always have a supremum. It appears that the Python max function
> differs from the mathematical maximum in that respect: max will return
> a value, even if that is not the "largest value"; the mathematical
> maximum might give no value.

Ah, but in Python most collections are usually finite. :)


-- ?!ng




From guido at digicool.com  Sat Mar 24 14:33:59 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 08:33:59 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 08:53:03 +0200."
             <E14ghv5-0003fu-00@darjeeling> 
References: <E14ghv5-0003fu-00@darjeeling> 
Message-ID: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>

OK, here's what I've done.  I've done a new cvs export of the r21b2
tag, this time *without* specifying -kv.  I've tarred it up and
uploaded it to SF and python.org.  The new tarball is called
Python-2.1b2a.tgz to distinguish it from the broken one.  I've removed
the old, broken tarball, and added a note to the python.org/2.1/ page
about the new tarball.

Background:

"cvs export -kv" changes all CVS version insertions from "$Revision:
1.9 $" to "1.9".  (It affects other CVS keywords too.)  This is so that
the versions don't get changed when someone else incorporates it into
their own CVS tree, which used to be a common usage pattern.

The question is, should we bother to make the code robust under
releases with -kv or not?  I used to write code that dealt with the
fact that __version__ could be either "$Revision: 1.9 $" or "1.9", but
clearly that bit of arcane knowledge got lost.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From gmcm at hypernet.com  Sat Mar 24 14:46:33 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Sat, 24 Mar 2001 08:46:33 -0500
Subject: [Python-Dev] Making types behave like classes
In-Reply-To: <3ABBF553.274D535@ActiveState.com>
Message-ID: <3ABC5EE9.2943.14C818C7@localhost>

[Paul Prescod]
> These are some half-baked ideas about getting classes and types
> to look more similar. I would like to know whether they are
> workable or not and so I present them to the people best equipped
> to tell me.

[expand Py_FindMethod's actions]

>  * __class__ would return for the type object
>  * __add__,__len__, __call__, ... would return a method wrapper
>  around
> the appropriate slot, 	
>  * __init__ might map to a no-op
> 
> I think that Py_FindMethod could even implement inheritance
> between types if we wanted.
> 
> We already do this magic for __methods__ and __doc__. Why not for
> all of the magic methods?

Those are introspective; typically read in the interactive 
interpreter. I can't do anything with them except read them.

If you wrap, eg, __len__, what can I do with it except call it? I 
can already do that with len().

> Benefits:
> 
>  * objects based on extension types would "look more like"
>  classes to
> Python programmers so there is less confusion about how they are
> different

I think it would probably enhance confusion to have the "look 
more like" without "being more like".
 
>  * users could stop using the type() function to get concrete
>  types and
> instead use __class__. After a version or two, type() could be
> formally deprecated in favor of isinstance and __class__.

__class__ is a callable object. It has a __name__. From the 
Python side, a type isn't much more than an address. Until 
Python's object model is redone, there are certain objects for 
which type(o) and o.__class__ return quite different things.
 
>  * we will have started some momentum towards type/class
>  unification
> which we could continue on into __setattr__ and subclassing.

The major lesson I draw from ExtensionClass and friends is 
that achieving this behavior in today's Python is horrendously 
complex and fragile. Until we can do it right, I'd rather keep it 
simple (and keep the warts on the surface).

- Gordon



From moshez at zadka.site.co.il  Sat Mar 24 14:45:32 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 15:45:32 +0200
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>
References: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>, <E14ghv5-0003fu-00@darjeeling>
Message-ID: <E14goMG-0006bL-00@darjeeling>

On Sat, 24 Mar 2001 08:33:59 -0500, Guido van Rossum <guido at digicool.com> wrote:

> OK, here's what I've done.  I've done a new cvs export of the r21b2
> tag, this time *without* specifying -kv.

This was clearly the solution to *this* problem ;-)
"No code changes in CVS within the same release" sounds like a good
rule.

> The question is, should we bother to make the code robust under
> releases with -kv or not?

Yes.
People *will* be incorporating Python into their own CVS trees. FreeBSD
does it with ports, and Debian are thinking of moving in this direction,
and some Debian maintainers already do that with upstream packages --
Python might be handled like that too.

The only problem I see is that we need to run the test-suite with a -kv'less
export. Fine, this should be part of the release procedure. 
I just went through the core grepping for '$Revision' and it seems this
is the only place this happens -- all the other places either put the default
version (RCS cruft and all), or are smart about handling it.

Since "smart" means just
__version__ = [part for part in "$Revision$".split() if '$' not in part][0]
We can just mandate that, and be safe.

However, whatever we do, the Windows build and the UNIX build must be
the same.
I think it should be possible to build the Windows version from the .tgz
and that is what (IMHO) should happen, instead of Tim and Guido exporting
from the CVS independently. This would stop problems like the one
Tim and I had this (my time) morning.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Sat Mar 24 16:34:13 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 10:34:13 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 15:45:32 +0200."
             <E14goMG-0006bL-00@darjeeling> 
References: <200103241333.IAA25157@cj20424-a.reston1.va.home.com>, <E14ghv5-0003fu-00@darjeeling>  
            <E14goMG-0006bL-00@darjeeling> 
Message-ID: <200103241534.KAA27065@cj20424-a.reston1.va.home.com>

> People *will* be incorporating Python into their own CVS trees. FreeBSD
> does it with ports, and Debian are thinking of moving in this direction,
> and some Debian maintainers already do that with upstream packages --
> Python might be handled like that too.

I haven't seen *any* complaints about this, so is it possible that
they don't mind having the $Revision: ... $ strings in there?

> The only problem I see if that we need to run the test-suite with a
> -kv'less export.  Fine, this should be part of the release
> procedure.  I just went through the core grepping for '$Revision'
> and it seems this is the only place this happens -- all the other
> places either put the default version (RCS cruft and all), or are
> smart about handling it.

Hm.  This means that the -kv version gets *much* less testing than the
regular checkout version.  I've done this in the past with
other projects and I remember that the bugs produced by this kind of
error are very subtle and not always caught by the test suite.

So I'm skeptical.

> Since "smart" means just
> __version__ = [part for part in "$Revision$".split() if '$' not in part][0]
> We can just mandate that, and be safe.

This is less typing, and no more obscure, and seems to work just as
well given that the only two inputs are "$Revision: 1.9 $" or "1.9":

    __version__ = "$Revision: 1.9 $".split()[-2:][0]
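A quick check that this spelling handles both possible inputs:

```python
# [-2:][0] takes the second-to-last whitespace field when the keyword
# is expanded, and the only field when -kv has collapsed it.
for raw in ("$Revision: 1.9 $", "1.9"):
    print(raw.split()[-2:][0])   # 1.9 in both cases
```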

> However, whatever we do the Windows build and the UNIX build must be the
> same.

That's hard right there -- we currently build the Windows installer
right out of the CVS tree.

> I think it should be possible to build the Windows version from the .tgz
> and that is what (IMHO) should happen, instead of Tim and Guido exporting
> from the CVS independantly. This would stop problems like the one
> Tim and I had this (my time) morning.

Who are you to tell us how to work?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From moshez at zadka.site.co.il  Sat Mar 24 16:41:10 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Sat, 24 Mar 2001 17:41:10 +0200
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <200103241534.KAA27065@cj20424-a.reston1.va.home.com>
References: <200103241534.KAA27065@cj20424-a.reston1.va.home.com>, <200103241333.IAA25157@cj20424-a.reston1.va.home.com>, <E14ghv5-0003fu-00@darjeeling>  
            <E14goMG-0006bL-00@darjeeling>
Message-ID: <E14gqAA-0006uP-00@darjeeling>

On Sat, 24 Mar 2001 10:34:13 -0500, Guido van Rossum <guido at digicool.com> wrote:

> I haven't seen *any* complaints about this, so is it possible that
> they don't mind having the $Revision: ... $ strings in there?

I don't know.
Like I said, my feelings about that are not very strong...

> > I think it should be possible to build the Windows version from the .tgz
> > and that is what (IMHO) should happen, instead of Tim and Guido exporting
> > from the CVS independantly. This would stop problems like the one
> > Tim and I had this (my time) morning.
> 
> Who are you telling us how to work?

I said "I think" and "IMHO", so I'm covered. I was only giving suggestions.
You're free to ignore them if you think my opinion is without merit.
I happen to think otherwise <8am wink>, but you're the BDFL and I'm not.
Are you saying it's not important to you that the .py's in Windows and
UNIX are the same?
I think it should be a priority, given that when people complain about
OS-independent problems, they often neglect to mention the OS.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From martin at loewis.home.cs.tu-berlin.de  Sat Mar 24 17:49:10 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 24 Mar 2001 17:49:10 +0100
Subject: [Python-Dev] Re: (Don't Read If You're Busy With 2.1b2) "Rich" Comparisons?
In-Reply-To: <Pine.LNX.4.10.10103240532080.4368-100000@skuld.kingmanhall.org>
	(message from Ka-Ping Yee on Sat, 24 Mar 2001 05:33:05 -0800 (PST))
References: <Pine.LNX.4.10.10103240532080.4368-100000@skuld.kingmanhall.org>
Message-ID: <200103241649.f2OGnAa04582@mira.informatik.hu-berlin.de>

> On Sat, 24 Mar 2001, Martin v. Loewis wrote:
> > So given a subset of a lattice, it may not have a maximum, but it will
> > always have a supremum. It appears that the Python max function
> > differs from the mathematical maximum in that respect: max will return
> > a value, even if that is not the "largest value"; the mathematical
> > maximum might give no value.
> 
> Ah, but in Python most collections are usually finite. :)

Even a finite collection may not have a maximum, which Moshe's
original example illustrates:

s1 = set(1,4,5)
s2 = set(4,5,6)

max([s1,s2]) == ???

With respect to the subset relation, the collection [s1,s2] has no
maximum; its supremum is set(1,4,5,6). A maximum is only guaranteed to
exist for a finite collection if the order is total.
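The same example in today's Python, using frozensets as a stand-in for Moshe's hypothetical set class, where `<` is the subset relation:

```python
s1 = frozenset({1, 4, 5})
s2 = frozenset({4, 5, 6})

print(s1 > s2, s2 > s1)     # False False -- neither is a maximum
print(max([s1, s2]) == s1)  # True: max just keeps the first element
                            # when no comparison answers "greater"
print(s1 | s2)              # the supremum: frozenset({1, 4, 5, 6})
```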

Regards,
Martin



From barry at digicool.com  Sat Mar 24 18:19:20 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sat, 24 Mar 2001 12:19:20 -0500
Subject: [Python-Dev] test_minidom crash
References: <E14ghv5-0003fu-00@darjeeling>
	<200103241310.IAA21370@cj20424-a.reston1.va.home.com>
Message-ID: <15036.55064.497185.806163@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    >> The bug is in Lib/xml/__init__.py __version__ =
    >> "1.9".split()[1] I don't know what it was supposed to be, but
    >> .split() without an argument splits on whitespace. best guess
    >> is "1.9".split('.') ??

    GvR> This must be because I used "cvs export -kv" to create the
    GvR> tarball this time.  This may warrant a release update :-(

Using "cvs export -kv" is a Good Idea for a release!  That's because
if others import the release into their own CVS, or pull the file into
an unrelated CVS repository, your revision numbers are preserved.

I haven't followed this thread very carefully, but isn't there a
better way to fix the problem rather than stop using -kv (I'm not sure
that's what Guido has in mind)?

-Barry



From martin at loewis.home.cs.tu-berlin.de  Sat Mar 24 18:30:46 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sat, 24 Mar 2001 18:30:46 +0100
Subject: [Python-Dev] test_minidom crash
Message-ID: <200103241730.f2OHUkJ04865@mira.informatik.hu-berlin.de>

[Moshe]
> I just went through the core grepping for '$Revision' and it seems
> this is the only place this happens -- all the other places either
> put the default version (RCS cruft and all), or are smart about
> handling it.

You have not searched carefully enough. pyexpat.c has

    char *rev = "$Revision: 2.44 $";
...
    PyModule_AddObject(m, "__version__",
                       PyString_FromStringAndSize(rev+11, strlen(rev+11)-2));

> I haven't seen *any* complaints about this, so is it possible that
> they don't mind having the $Revision: ... $ strings in there?

The problem is that they don't know the problems they run into
(yet). E.g. if they import pyexpat.c into their tree, they get
1.1.1.1; even after later imports, they still get 1.x. Now, PyXML
currently decides that the Python pyexpat is not good enough if it is
older than 2.39. In turn, they might get different code being used
when installing out of their CVS as compared to installing from the
source distributions.

That all shouldn't cause problems, but it would probably help if
source releases continue to use -kv; then likely every end-user will
get the same sources. I'd volunteer to review the core sources (and
produce patches) if that is desired.

Regards,
Martin



From barry at digicool.com  Sat Mar 24 18:33:47 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sat, 24 Mar 2001 12:33:47 -0500
Subject: [Python-Dev] test_minidom crash
References: <E14ghv5-0003fu-00@darjeeling>
	<200103241333.IAA25157@cj20424-a.reston1.va.home.com>
Message-ID: <15036.55931.367420.983599@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    GvR> The question is, should we bother to make the code robust
    GvR> under releases with -kv or not?

Yes.
    
    GvR> I used to write code that dealt with the fact that
    GvR> __version__ could be either "$Release: 1.9$" or "1.9", but
    GvR> clearly that bit of arcane knowledge got lost.

Time to re-educate then!

On the one hand, I personally try to avoid assigning __version__ from
a CVS revision number because I'm usually interested in a more
confederated release.  I.e. mimelib 0.2 as opposed to
mimelib/mimelib/__init__.py revision 1.4.  If you want the CVS
revision of the file to be visible in the file, use a different global
variable, or stick it in a comment and don't worry about sucking out
just the numbers.

OTOH, I understand this is a convenient way to not have to munge
version numbers so lots of people do it (I guess).

Oh, I see there are other followups to this thread, so I'll shut up
now.  I think Guido's split() idiom is the Right Thing To Do; it works
with branch CVS numbers too:

>>> "$Revision: 1.9.4.2 $".split()[-2:][0]
'1.9.4.2'
>>> "1.9.4.2".split()[-2:][0]
'1.9.4.2'

-Barry



From guido at digicool.com  Sat Mar 24 19:13:45 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 13:13:45 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 12:19:20 EST."
             <15036.55064.497185.806163@anthem.wooz.org> 
References: <E14ghv5-0003fu-00@darjeeling> <200103241310.IAA21370@cj20424-a.reston1.va.home.com>  
            <15036.55064.497185.806163@anthem.wooz.org> 
Message-ID: <200103241813.NAA27426@cj20424-a.reston1.va.home.com>

> Using "cvs export -kv" is a Good Idea for a release!  That's because
> if others import the release into their own CVS, or pull the file into
> an unrelated CVS repository, your revision numbers are preserved.

I know, but I doubt that this is used much any more.  I haven't had
any complaints about this, and I know that we didn't use -kv for
previous releases (I checked 1.5.2, 1.6 and 2.0).

> I haven't followed this thread very carefully, but isn't there a
> better way to fix the problem rather than stop using -kv (I'm not sure
> that's what Guido has in mind)?

Well, if we only use -kv to create the final tarball and installer, and
everybody else uses just the CVS version, the problem is that we don't
have enough testing time in.

Given that most code is written to deal with "$Revision: 1.9 $", why
bother breaking it?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Sat Mar 24 19:14:51 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sat, 24 Mar 2001 13:14:51 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: Your message of "Sat, 24 Mar 2001 18:30:46 +0100."
             <200103241730.f2OHUkJ04865@mira.informatik.hu-berlin.de> 
References: <200103241730.f2OHUkJ04865@mira.informatik.hu-berlin.de> 
Message-ID: <200103241814.NAA27441@cj20424-a.reston1.va.home.com>

> That all shouldn't cause problems, but it would probably help if
> source releases continue to use -kv; then likely every end-user will
> get the same sources. I'd volunteer to review the core sources (and
> produce patches) if that is desired.

I'm not sure if it's a matter of "continue to use" -- as I said, 1.5.2
and later releases haven't used -kv.

Nevertheless, patches to fix this will be most welcome.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From tim.one at home.com  Sat Mar 24 21:49:46 2001
From: tim.one at home.com (Tim Peters)
Date: Sat, 24 Mar 2001 15:49:46 -0500
Subject: [Python-Dev] test_minidom crash
In-Reply-To: <E14goMG-0006bL-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCAEBMJIAA.tim.one@home.com>

[Moshe]
> ...
> I just went through the core grepping for '$Revision' and it seems
> this is the only place this happens -- all the other places either put
> the default version (RCS cruft and all), or are smart about handling it.

Hmm.  Unless it's in a *comment*, I expect most uses are dubious.  Clear
example, from the new Lib/unittest.py:

__version__ = "$Revision: 1.2 $"[11:-2]

Presumably that's yielding an empty string under the new tarball release.

One of a dozen fuzzy examples, from pickle.py:

__version__ = "$Revision: 1.46 $"       # Code version

The module makes no other use of this, and since it's not in a comment I have
to presume that the author *intended* clients to access pickle.__version__
directly.  But, if so, they've been getting the $Revision business for years,
so changing the released format now could break users' code.
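[A defensive parse in the spirit of the split() idiom elsewhere in this thread; the helper name is an editorial illustration, not code from the thread:]

```python
def parse_revision(s):
    """Return the bare revision number for any of the forms seen here:

    '$Revision: 1.9 $'  -> '1.9'      (normal CVS checkout)
    '1.9.4.2'           -> '1.9.4.2'  (tarball made with 'cvs export -kv')
    '$Revision$'        -> ''         (keyword never expanded)
    """
    words = s.replace('$', ' ').split()
    if words and words[0] in ('Revision:', 'Revision'):
        words = words[1:]
    return words[0] if words else ''

assert parse_revision("$Revision: 1.46 $") == "1.46"
assert parse_revision("1.9.4.2") == "1.9.4.2"
assert parse_revision("$Revision$") == ""
```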

> ...
> However, whatever we do the Windows build and the UNIX build must be
> the same.

*Sounds* good <wink>.

> I think it should be possible to build the Windows version from the
> .tgz and that is what (IMHO) should happen, instead of Tim and Guido
> exporting from the CVS independantly. This would stop problems like the
> one Tim and I had this (my time) morning.

Ya, sounds good too.  A few things against it:  The serialization would add
hours to the release process, in part because I get a lot of testing done
now, on the Python I install *from* the Windows installer I build, while the
other guys are finishing the .tgz business (note that Guido doesn't similarly
run tests on a Python built from the tarball, else he would have caught this
problem before you!).

Also in part because the Windows installer is not a simple packaging of the
source tree:  the Windows version also ships with pre-compiled components for
Tcl/Tk, zlib, bsddb and pyexpat.  The source for that stuff doesn't come in
the tarball; it has to be sprinkled "by hand" into the source tree.

The last gets back to Guido's point, which is also a good one:  if the
Windows release gets built from a tree I've used for the very first time a
couple of hours before the release, the odds are higher that a process
screwup gets overlooked.

To date, there have been no "process bugs" in the Windows build process, and
I'd be loath to give that up.  Building from the tree I use every day is ...
reassuring.

At heart, I don't much like the idea of using source revision numbers as code
version numbers anyway -- "New and Improved!  Version 1.73 stripped a
trailing space from line 239!" <wink>.

more-info-than-anyone-needs-to-know-ly y'rs  - tim




From paul at pfdubois.com  Sat Mar 24 23:14:03 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Sat, 24 Mar 2001 14:14:03 -0800
Subject: [Python-Dev] distutils change breaks code, Pyfort
Message-ID: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>

The requirement of a version argument to the distutils command breaks Pyfort
and many of my existing packages. These packages are not intended for use
with the distribution commands and a package version number would be
meaningless.

I will make a new Pyfort that supplies a version number to the call it makes
to setup. However, I think this change to distutils is a poor idea. If the
version number would be required for the distribution commands, let *them*
complain, perhaps by setting a default value of time.asctime(time.gmtime())
or something that the distribution commands could object to.

I apologize if I missed an earlier discussion of this change that seems to
be in 2.1b2 but not 2.1b1, as I am new to this list.

Paul





From jafo at tummy.com  Sun Mar 25 00:17:35 2001
From: jafo at tummy.com (Sean Reifschneider)
Date: Sat, 24 Mar 2001 16:17:35 -0700
Subject: [Python-Dev] RFC: PEP243: Module Repository Upload Mechanism
Message-ID: <20010324161735.A19818@tummy.com>

Included below is the version of PEP243 after its initial round of review.
I welcome any feedback.

Thanks,
Sean

============================================================================
PEP: 243
Title: Module Repository Upload Mechanism
Version: $Revision$
Author: jafo-pep at tummy.com (Sean Reifschneider)
Status: Draft
Type: Standards Track
Created: 18-Mar-2001
Python-Version: 2.1
Post-History: 
Discussions-To: distutils-sig at python.org


Abstract

    For a module repository system (such as Perl's CPAN) to be
    successful, it must be as easy as possible for module authors to
    submit their work.  An obvious place for this submit to happen is
    in the Distutils tools after the distribution archive has been
    successfully created.  For example, after a module author has
    tested their software (verifying the results of "setup.py sdist"),
    they might type "setup.py sdist --submit".  This would flag
    Distutils to submit the source distribution to the archive server
    for inclusion and distribution to the mirrors.

    This PEP only deals with the mechanism for submitting the software
    distributions to the archive, and does not deal with the actual
    archive/catalog server.


Upload Process

    The upload will include the Distutils "PKG-INFO" meta-data
    information (as specified in PEP-241 [1]), the actual software
    distribution, and other optional information.  This information
    will be uploaded as a multi-part form encoded the same as a
    regular HTML file upload request.  This form is posted using
    ENCTYPE="multipart/form-data" encoding [RFC1867].

    The upload will be made to the host "modules.python.org" on port
    80/tcp (POST http://modules.python.org:80/swalowpost.cgi).  The form
    will consist of the following fields:

        distribution -- The file containing the module software (for
        example, a .tar.gz or .zip file).

        distmd5sum -- The MD5 hash of the uploaded distribution,
        encoded in ASCII representing the hexadecimal representation
        of the digest ("for byte in digest: s = s + ('%02x' %
        ord(byte))").

        pkginfo (optional) -- The file containing the distribution
        meta-data (as specified in PEP-241 [1]).  Note that if this is not
        included, the distribution file is expected to be in .tar format
        (gzip and bzip2 compression are allowed) or .zip format, with a
        "PKG-INFO" file in the top-level directory it extracts
        ("package-1.00/PKG-INFO").

        infomd5sum (required if pkginfo field is present) -- The MD5 hash
        of the uploaded meta-data, encoded in ASCII representing the
        hexadecimal representation of the digest ("for byte in digest:
        s = s + ('%02x' % ord(byte))").

        platform (optional) -- A string representing the target
        platform for this distribution.  This is only for binary
        distributions.  It is encoded as
        "<os_name>-<os_version>-<platform architecture>-<python
        version>".

        signature (optional) -- An OpenPGP-compatible signature [RFC2440]
        of the uploaded distribution as signed by the author.  This may be
        used by the cataloging system to automate acceptance of uploads.

        protocol_version -- A string indicating the protocol version that
        the client supports.  This document describes protocol version "1".
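    [The hex encoding spelled out for distmd5sum/infomd5sum above is what
    today's hashlib calls hexdigest(); a quick check, with illustrative
    data (hashlib postdates this PEP draft, which used the md5 module):]

    ```python
    import hashlib

    data = b"contents of the uploaded distribution file"

    # The PEP's loop, transcribed (in Python 3, iterating over bytes
    # yields ints, so ord() is no longer needed):
    s = ""
    for byte in hashlib.md5(data).digest():
        s = s + ('%02x' % byte)

    assert s == hashlib.md5(data).hexdigest()
    ```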


Return Data

    The status of the upload will be reported using non-standard HTTP
    ("X-*") headers.  The "X-Swalow-Status" header may have the following
    values:

        SUCCESS -- Indicates that the upload has succeeded.

        FAILURE -- The upload is, for some reason, unable to be
        processed.

        TRYAGAIN -- The server is unable to accept the upload at this
        time, but the client should try again at a later time.
        Potential causes of this are resource shortages on the server,
        administrative down-time, etc...

    Optionally, there may be a "X-Swalow-Reason" header which includes a
    human-readable string which provides more detailed information about
    the "X-Swalow-Status".

    If there is no "X-Swalow-Status" header, or it does not contain one of
    the three strings above, it should be treated as a temporary failure.

    Example:

        >>> f = urllib.urlopen('http://modules.python.org:80/swalowpost.cgi')
        >>> s = f.headers['x-swalow-status']
        >>> s = s + ': ' + f.headers.get('x-swalow-reason', '<None>')
        >>> print s
        FAILURE: Required field "distribution" missing.


Sample Form

    The upload client must submit the page in the same form as
    Netscape Navigator version 4.76 for Linux produces when presented
    with the following form:

        <H1>Upload file</H1>
        <FORM NAME="fileupload" METHOD="POST" ACTION="swalowpost.cgi"
              ENCTYPE="multipart/form-data">
        <INPUT TYPE="file" NAME="distribution"><BR>
        <INPUT TYPE="text" NAME="distmd5sum"><BR>
        <INPUT TYPE="file" NAME="pkginfo"><BR>
        <INPUT TYPE="text" NAME="infomd5sum"><BR>
        <INPUT TYPE="text" NAME="platform"><BR>
        <INPUT TYPE="text" NAME="signature"><BR>
        <INPUT TYPE="hidden" NAME="protocol_version" VALUE="1"><BR>
        <INPUT TYPE="SUBMIT" VALUE="Upload">
        </FORM>


Platforms

    The following are valid os names:

        aix beos debian dos freebsd hpux mac macos mandrake netbsd
        openbsd qnx redhat solaris suse windows yellowdog

    The above include a number of different types of distributions of
    Linux.  Because of versioning issues these must be split out, and
    it is expected that when it makes sense for one system to use
    distributions made on other similar systems, the download client
    will make the distinction.

    Version is the official version string specified by the vendor for
    the particular release.  For example, "2000" and "nt" (Windows),
    "9.04" (HP-UX), "7.0" (RedHat, Mandrake).

    The following are valid architectures:

        alpha hppa ix86 powerpc sparc ultrasparc


Status

    I currently have a proof-of-concept client and server implemented.
    I plan to have the Distutils patches ready for the 2.1 release.
    Combined with Andrew's PEP-241 [1] for specifying distribution
    meta-data, I hope to have a platform which will allow us to gather
    real-world data for finalizing the catalog system for the 2.2
    release.


References

    [1] Metadata for Python Software Packages, Kuchling,
        http://python.sourceforge.net/peps/pep-0241.html

    [RFC1867] Form-based File Upload in HTML
        http://www.faqs.org/rfcs/rfc1867.html

    [RFC2440] OpenPGP Message Format
        http://www.faqs.org/rfcs/rfc2440.html


Copyright

    This document has been placed in the public domain.



Local Variables:
mode: indented-text
indent-tabs-mode: nil
End:
-- 
 A smart terminal is not a smart*ass* terminal, but rather a terminal
 you can educate.  -- Rob Pike
Sean Reifschneider, Inimitably Superfluous <jafo at tummy.com>
tummy.com - Linux Consulting since 1995. Qmail, KRUD, Firewalls, Python



From martin at loewis.home.cs.tu-berlin.de  Sun Mar 25 01:47:26 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Sun, 25 Mar 2001 01:47:26 +0100
Subject: [Python-Dev] distutils change breaks code, Pyfort
Message-ID: <200103250047.f2P0lQn00987@mira.informatik.hu-berlin.de>

> The  requirement of  a  version argument  to  the distutils  command
> breaks Pyfort and  many of my existing packages.  These packages are
> not intended  for use with  the distribution commands and  a package
> version number would be meaningless.

So this is clearly an incompatible change. According to the procedures
in PEP 5, there should be a warning issued before aborting setup.
Later (major) releases of Python, or distutils, could change the
warning into an error.

Nevertheless, I agree with the change in principle. Distutils can and
should enforce a certain amount of policy; among other things, having
a version number sounds like a reasonable requirement - even though
its primary use is for building (and uploading) distributions. Are you
saying that Pyfort does not have a version number? On SF, I can get
version 6.3...

Regards,
Martin



From paul at pfdubois.com  Sun Mar 25 03:43:52 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Sat, 24 Mar 2001 17:43:52 -0800
Subject: [Python-Dev] RE: distutils change breaks code, Pyfort
In-Reply-To: <200103250047.f2P0lQn00987@mira.informatik.hu-berlin.de>
Message-ID: <ADEOIFHFONCLEEPKCACCAEDNCHAA.paul@pfdubois.com>

Pyfort is the kind of package the change was intended for, and it does have
a version number. But I have other packages that cannot stand on their own,
that are part of a bigger suite of packages, and on which the dist commands
will never be used. They don't have a MANIFEST, etc. The setup.py file is
used instead of a Makefile. I don't think it is logical to require a version
number that is never used in that case. This also raises the "entry fee" for
learning to use Distutils or starting a new package.

In the case of Pyfort there is NO setup.py; it just runs a command on
the fly. But I've already fixed it with version 6.3.

I think we have all focused on the public distribution problem but in fact
Distutils is just great as an internal tool for building large software
projects and that is how I use it. I agree that if I want to use sdist,
bdist etc. that I need to set the version. But then, I need to do other
things too in that case.





From barry at digicool.com  Sun Mar 25 05:06:21 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Sat, 24 Mar 2001 22:06:21 -0500
Subject: [Python-Dev] RE: distutils change breaks code, Pyfort
References: <200103250047.f2P0lQn00987@mira.informatik.hu-berlin.de>
	<ADEOIFHFONCLEEPKCACCAEDNCHAA.paul@pfdubois.com>
Message-ID: <15037.24749.117157.228368@anthem.wooz.org>

>>>>> "PFD" == Paul F Dubois <paul at pfdubois.com> writes:

    PFD> I think we have all focused on the public distribution
    PFD> problem but in fact Distutils is just great as an internal
    PFD> tool for building large software projects and that is how I
    PFD> use it.

I've used it this way too, and you're right, it's great for this.
Esp. for extensions, it's much nicer than fiddling with
Makefile.pre.in's etc.  So I think I agree with you about the version
numbers and other required metadata -- or at least, there should be an
escape.

-Barry



From tim.one at home.com  Sun Mar 25 07:07:20 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 25 Mar 2001 00:07:20 -0500
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010321214432.A25810@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCCEDBJIAA.tim.one@home.com>

[Neil Schemenauer]
> Apparently they [Icon-style generators] are good for lots of other
> things too.  Tonight I implemented passing values using resume().
>  Next, I decided to see if I had enough magic juice to tackle the
> coroutine example from Gordon's stackless tutorial.  Its turns out
> that I didn't need the extra functionality.  Generators are enough.
>
> The code is not too long so I've attached it.  I figure that some
> people might need a break from 2.1 release issues.

I'm afraid we were buried alive under them at the time, and I don't want this
one to vanish in the bit bucket!

> I think the generator version is even simpler than the coroutine
> version.
>
> [Example code for the Dahl/Hoare "squasher" program elided -- see
>  the archive]

This raises a potentially interesting point:  is there *any* application of
coroutines for which simple (yield-only-to-immediate-caller) generators
wouldn't suffice, provided that they're explicitly resumable?

I suspect there isn't.  If you give me a coroutine program, and let me add a
"control loop", I can:

1. Create an Icon-style generator for each coroutine "before the loop".

2. Invoke one of the coroutines "before the loop".

3. Replace each instance of

       coroutine_transfer(some_other_coroutine, some_value)

   within the coroutines by

       yield some_other_coroutine, some_value

4. The "yield" then returns to the control loop, which picks apart
   the tuple to find the next coroutine to resume and the value to
   pass to it.

This starts to look a lot like uthreads, but built on simple generator
yield/resume.

It loses some things:

A. Coroutine A can't *call* routine B and have B do a co-transfer
   directly.  But A *can* invoke B as a generator and have B yield
   back to A, which in turn yields back to its invoker ("the control
   loop").

B. As with recursive Icon-style generators, a partial result generated
   N levels deep in the recursion has to suspend its way thru N
   levels of frames, and resume its way back down N levels of frames
   to get moving again.  Real coroutines can transmit results directly
   to the ultimate consumer.

OTOH, it may gain more than it loses:

A. Simple to implement in CPython without threads, and at least
   possible likewise even for Jython.

B. C routines "in the middle" aren't necessarily show-stoppers.  While
   they can't exploit Python's implementation of generators directly,
   they *could* participate in the yield/resume *protocol*, acting "as
   if" they were Python routines.  Just like Python routines have to
   do today, C routines would have to remember their own state and
   arrange to save/restore it appropriately across calls (but to the
   C routines, they *are* just calls and returns, and nothing trickier
   than that -- their frames truly vanish when "suspending up", so
   don't get in the way).

the-meek-shall-inherit-the-earth<wink>-ly y'rs  - tim




From nas at arctrix.com  Sun Mar 25 07:47:48 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Sat, 24 Mar 2001 21:47:48 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCCEDBJIAA.tim.one@home.com>; from tim.one@home.com on Sun, Mar 25, 2001 at 12:07:20AM -0500
References: <20010321214432.A25810@glacier.fnational.com> <LNBBLJKPBEHFEDALKOLCCEDBJIAA.tim.one@home.com>
Message-ID: <20010324214748.A32161@glacier.fnational.com>

On Sun, Mar 25, 2001 at 12:07:20AM -0500, Tim Peters wrote:
> If you give me a coroutine program, and let me add a "control
> loop", ...

This is exactly what I started doing when I was trying to rewrite
your Coroutine.py module to use generators.

> A. Simple to implement in CPython without threads, and at least
>    possible likewise even for Jython.

I'm not sure about Jython.  The sys._getframe(), frame.suspend(),
and frame.resume() low level interface is nice.  I think Jython
must know which frames are going to be suspended at compile time.
That makes it hard to build higher level control abstractions.  I
don't know much about Jython though so maybe there's another way.
In any case it should be possible to use threads to implement
some common higher level interfaces.

  Neil



From tim.one at home.com  Sun Mar 25 08:11:58 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 25 Mar 2001 01:11:58 -0500
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <20010324214748.A32161@glacier.fnational.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>

[Tim]
>> If you give me a coroutine program, and let me add a "control
>> loop", ...

[Neil Schemenauer]
> This is exactly what I started doing when I was trying to rewrite
> your Coroutine.py module to use generators.

Ya, I figured as much -- for a Canadian, you don't drool much <wink>.

>> A. Simple to implement in CPython without threads, and at least
>>    possible likewise even for Jython.

> I'm not sure about Jython.  The sys._getframe(), frame.suspend(),
> and frame.resume() low level interface is nice.  I think Jython
> must know which frames are going to be suspended at compile time.

Yes, Samuele said as much.  My belief is that generators don't become *truly*
pleasant unless "yield" ("suspend"; whatever) is made a new statement type.
Then Jython knows exactly where yields can occur.  As in CLU (but not Icon),
it would also be fine by me if routines *used* as generators also needed to
be explicitly marked as such (this is a non-issue in Icon because *every*
Icon expression "is a generator" -- there is no other kind of procedure
there).

> That makes it hard to build higher level control abstractions.
> I don't know much about Jython though so maybe there's another way.
> In any case it should be possible to use threads to implement
> some common higher level interfaces.

What I'm wondering is whether I care <0.4 wink>.  I agreed with you, e.g.,
that your squasher example was more pleasant to read using generators than in
its original coroutine form.  People who want to invent brand new control
structures will be happier with Scheme anyway.




From tim.one at home.com  Sun Mar 25 10:07:09 2001
From: tim.one at home.com (Tim Peters)
Date: Sun, 25 Mar 2001 03:07:09 -0500
Subject: FW: FW: [Python-Dev] Simple generator implementation 
In-Reply-To: <200103210423.VAA20300@localhost.localdomain>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEDGJIAA.tim.one@home.com>

[Tim]
>> The correspondent I quoted believed the latter ["simple" generators]
>> were on-target for XSLT work ... But ... I don't know whether they're
>> sufficient for what you have in mind.

[Uche Ogbuji]
> Based on a discussion with Christian at IPC9, they are.  I should
> have been more clear about that.  My main need is to be able to change
> a bit of context and invoke a different execution path, without going
> through the full overhead of a function call.  XSLT, if written
> "naturally", tends to involve huge numbers of such tweak-context-and-
> branch operations.
> ...
> Suspending only to the invoker should do the trick because it is
> typically a single XSLT instruction that governs multiple tree-
> operations with varied context.

Thank you for explaining more!  It's helpful.

> At IPC9, Guido put up a poll of likely use of stackless features,
> and it was a pretty clear arithmetic progression from those who
> wanted to use microthreads, to those who wanted co-routines, to
> those who wanted just generators.  The generator folks were
> probably 2/3 of the assembly.  Looks as if many have decided,
> and they seem to agree with you.

They can't:  I haven't taken a position <0.5 wink>.  As I said, I'm trying to
get closer to understanding the cost/benefit tradeoffs here.

I've been nagging in favor of simple generators for a decade now, and every
time I've tried they've gotten hijacked by some grander scheme with much
muddier tradeoffs.  That's been very frustrating, since I've had good uses
for simple generators darned near every day of my Python life, and "the only
thing stopping them" has been a morbid fascination with Scheme's mistakes
<wink>.  That phase appears to be over, and *now* "the only thing stopping
them" appears to be a healthy fascination with coroutines and uthreads.
That's cool, although this is definitely a "the perfect is the enemy of the
good" kind of thing.

trying-to-leave-a-better-world-for-the-children<wink>-ly y'rs  - tim




From paulp at ActiveState.com  Sun Mar 25 20:30:34 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Sun, 25 Mar 2001 10:30:34 -0800
Subject: [Python-Dev] Making types behave like classes
References: <3ABC5EE9.2943.14C818C7@localhost>
Message-ID: <3ABE3949.DE50540C@ActiveState.com>

Gordon McMillan wrote:
> 
>...
> 
> Those are introspective; typically read in the interactive
> interpreter. I can't do anything with them except read them.
>
> If you wrap, eg, __len__, what can I do with it except call it? 

You can store away a reference to it and then call it later.

> I can already do that with len().
> 
> > Benefits:
> >
> >  * objects based on extension types would "look more like"
> >  classes to
> > Python programmers so there is less confusion about how they are
> > different
> 
> I think it would probably enhance confusion to have the "look
> more like" without "being more like".

Looking more like is the same as being more like. In other words, there
is a finite list of differences in behavior between types and classes
and I think we should chip away at them one by one with each release of
Python.

Do you think that there is a particular difference (perhaps relating to
subclassing) that is the "real" difference and the rest are just
cosmetic?

> >  * users could stop using the type() function to get concrete
> >  types and
> > instead use __class__. After a version or two, type() could be
> > formally deprecated in favor of isinstance and __class__.
> 
> __class__ is a callable object. It has a __name__. From the
> Python side, a type isn't much more than an address. 

Type objects also have names. They are not (yet) callable but I cannot
think of a circumstance in which that would matter. It would require
code like this:

cls = getattr(foo, "__class__", None)
if cls:
    cls(...)

I don't know where the arglist for cls would come from. In general, I
can't imagine what the goal of this code would be. I can see code like
this in a "closed world" situation where I know all of the classes
involved, but I can't imagine a case where this kind of code will work
with any old class.

Anyhow, I think that type objects should be callable just like
classes...but I'm trying to pick off low-hanging fruit first. I think
that the less "superficial" differences there are between types and
classes, the easier it becomes to tackle the deep differences because
more code out there will be naturally polymorphic instead of using: 

if type(obj) is InstanceType: 
	do_onething() 
else: 
	do_anotherthing()

That is an evil pattern if we are going to merge types and classes.
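As an aside, the dispatch pattern above can usually be replaced by plain attribute access that works the same way for instances and built-in objects. A minimal sketch in modern Python (where types and classes have since been unified; the class and function names are hypothetical):

```python
# Instead of branching on "type(obj) is InstanceType", rely on behavior
# shared by instances and built-in objects alike: both expose __class__,
# so a single polymorphic code path suffices.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def class_name(obj):
    # Works identically for user-defined instances and builtins.
    return obj.__class__.__name__

assert class_name(Point(1, 2)) == "Point"
assert class_name([1, 2, 3]) == "list"
assert class_name("abc") == "str"
```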

> Until
> Python's object model is redone, there are certain objects for
> which type(o) and o.__class__ return quite different things.

I am very nervous about waiting for a big-bang re-model of the object
model.

>...
> The major lesson I draw from ExtensionClass and friends is
> that achieving this behavior in today's Python is horrendously
> complex and fragile. Until we can do it right, I'd rather keep it
> simple (and keep the warts on the surface).

I'm trying to find an incremental way forward because nobody seems to
have time or energy for a big bang.

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From greg at cosc.canterbury.ac.nz  Sun Mar 25 23:53:02 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Mon, 26 Mar 2001 09:53:02 +1200 (NZST)
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: <m14gEEA-000CnEC@artcom0.artcom-gmbh.de>
Message-ID: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz>

pf at artcom-gmbh.de (Peter Funk):

> All current filemanagers on Linux (KDE kfm, Gnome gmc) are flawed 
> in this respect.

I don't think you can call that a "flaw", given that these
filemanagers are only designed to deal with Unix file systems.

I think it's reasonable to only expect things in the platform
os module to deal with the platform's native file system.
Trying to anticipate how every platform's cross-platform
file servers for all other platforms are going to store their
data just isn't practical.

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From guido at digicool.com  Mon Mar 26 04:03:52 2001
From: guido at digicool.com (Guido van Rossum)
Date: Sun, 25 Mar 2001 21:03:52 -0500
Subject: Alleged deprecation of shutils (Re: [Python-Dev] Function in os module for available disk space, why)
In-Reply-To: Your message of "Mon, 26 Mar 2001 09:53:02 +1200."
             <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> 
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> 
Message-ID: <200103260203.VAA05048@cj20424-a.reston1.va.home.com>

> > All current filemanagers on Linux (KDE kfm, Gnome gmc) are flawed 
> > in this respect.
> 
> I don't think you can call that a "flaw", given that these
> filemanagers are only designed to deal with Unix file systems.
> 
> I think it's reasonable to only expect things in the platform
> os module to deal with the platform's native file system.
> Trying to anticipate how every platform's cross-platform
> file servers for all other platforms are going to store their
> data just isn't practical.

You say that now, but as such cross-system servers become more common,
we should expect the tools to deal with them well, rather than
complain "the other guy doesn't play by our rules".

--Guido van Rossum (home page: http://www.python.org/~guido/)



From gmcm at hypernet.com  Mon Mar 26 04:44:59 2001
From: gmcm at hypernet.com (Gordon McMillan)
Date: Sun, 25 Mar 2001 21:44:59 -0500
Subject: [Python-Dev] Making types behave like classes
In-Reply-To: <3ABE3949.DE50540C@ActiveState.com>
Message-ID: <3ABE66DB.18389.1CB7239A@localhost>

[Gordon]
> > I think it would probably enhance confusion to have the "look
> > more like" without "being more like".
[Paul] 
> Looking more like is the same as being more like. In other words,
> there is a finite list of differences in behavior between types
> and classes and I think we should chip away at them one by one
> with each release of Python.

There's only one difference that matters: subclassing. I don't 
think there's an incremental path to that that leaves Python 
"easily extended".

[Gordon]
> > __class__ is a callable object. It has a __name__. From the
> > Python side, a type isn't much more than an address. 
> 
> Type objects also have names. 

But not a __name__.

> They are not (yet) callable but I
> cannot think of a circumstance in which that would matter. 

Take a look at copy.py.

> Anyhow, I think that type objects should be callable just like
> classes...but I'm trying to pick off low-hanging fruit first. I
> think that the fewer "superficial" differences there are between
> types and classes, the easier it becomes to tackle the deep
> differences because more code out there will be naturally
> polymorphic instead of using: 
> 
> if type(obj) is InstanceType: 
>  do_onething() 
> else: 
>  do_anotherthing()
> 
> That is an evil pattern if we are going to merge types and
> classes.

And it would likely become:
 if callable(obj.__class__):
   ....

Explicit is better than implicit for warts, too.
 


- Gordon



From moshez at zadka.site.co.il  Mon Mar 26 12:27:37 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 26 Mar 2001 12:27:37 +0200
Subject: [Python-Dev] sandbox?
Message-ID: <E14hUDp-0003tf-00@darjeeling>

I remember there was the discussion here about sandbox, but
I'm not sure I understand the rules. Checkin without asking
permission to sandbox ok? Just make my private dir and checkin
stuff?

Anybody who feels he can speak with authority is welcome ;-)
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From mwh21 at cam.ac.uk  Mon Mar 26 15:18:26 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 26 Mar 2001 14:18:26 +0100
Subject: [Python-Dev] Re: Alleged deprecation of shutils
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> <200103260203.VAA05048@cj20424-a.reston1.va.home.com>
Message-ID: <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>

Guido van Rossum <guido at digicool.com> writes:

> > > All current filemanagers on Linux (KDE kfm, Gnome gmc) are flawed 
> > > in this respect.
> > 
> > I don't think you can call that a "flaw", given that these
> > filemanagers are only designed to deal with Unix file systems.
> > 
> > I think it's reasonable to only expect things in the platform
> > os module to deal with the platform's native file system.
> > Trying to anticipate how every platform's cross-platform
> > file servers for all other platforms are going to store their
> > data just isn't practical.
> 
> You say that now, but as such cross-system servers become more common,
> we should expect the tools to deal with them well, rather than
> complain "the other guy doesn't play by our rules".

So, a goal for 2.2: getting moving/copying/deleting of files and
directories working properly (ie. using native APIs) on all major
supported platforms, with all the legwork that implies.  We're not
really very far from this now, are we?  Perhaps (the functionality of)
shutil.{rmtree,copy,copytree} should move into os and if necessary be
implemented in nt or dos or mac or whatever.  Any others?
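For reference, the functions under discussion already cover the basic portable cases; a small self-contained sketch using only the documented shutil/os calls (the temporary paths and file name here are hypothetical):

```python
# Copy a directory tree, verify the copy, then remove both trees,
# using the shutil functions under discussion.
import os
import shutil
import tempfile

src = tempfile.mkdtemp()
with open(os.path.join(src, "note.txt"), "w") as f:
    f.write("hello")

dst = src + "-copy"
shutil.copytree(src, dst)               # recursive copy
assert os.path.isfile(os.path.join(dst, "note.txt"))

shutil.rmtree(src)                      # recursive delete
shutil.rmtree(dst)
assert not os.path.exists(src) and not os.path.exists(dst)
```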

Cheers,
M.

-- 
39. Re graphics:  A picture is worth 10K  words - but only those
    to describe the picture. Hardly any sets of 10K words can be
    adequately described with pictures.
  -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html




From jack at oratrix.nl  Mon Mar 26 16:26:41 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 26 Mar 2001 16:26:41 +0200
Subject: [Python-Dev] Re: Alleged deprecation of shutils 
In-Reply-To: Message by Michael Hudson <mwh21@cam.ac.uk> ,
	     26 Mar 2001 14:18:26 +0100 , <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <20010326142642.48DE836B2C0@snelboot.oratrix.nl>

> > You say that now, but as such cross-system servers become more common,
> > we should expect the tools to deal with them well, rather than
> > complain "the other guy doesn't play by our rules".
> 
> So, a goal for 2.2: getting moving/copying/deleting of files and
> directories working properly (ie. using native APIs) on all major
> supported platforms, with all the legwork that implies.

Well, if we want to support the case Guido sketches, a machine on one platform 
being a fileserver for another platform, things may well be bleak.

For instance, most Apple-fileservers for Unix will use the .HSResource 
directory to store resource forks and the .HSancillary file to store mac 
file-info, but not all do. I haven't tried it yet, but from what I've read, 
MacOSX over NFS uses a different scheme.

But, all that said, if we look only at a single platform the basic 
functionality of shutils should work. There's a Mac module (macostools) that 
has most of the functionality, but of course not all, and it has some extra as 
well, and not all names are the same (shutil compatibility wasn't a goal when 
it was written).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 





From guido at digicool.com  Mon Mar 26 16:33:00 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 09:33:00 -0500
Subject: [Python-Dev] sandbox?
In-Reply-To: Your message of "Mon, 26 Mar 2001 12:27:37 +0200."
             <E14hUDp-0003tf-00@darjeeling> 
References: <E14hUDp-0003tf-00@darjeeling> 
Message-ID: <200103261433.JAA06244@cj20424-a.reston1.va.home.com>

> I remember there was the discussion here about sandbox, but
> I'm not sure I understand the rules. Checkin without asking
> permission to sandbox ok? Just make my private dir and checkin
> stuff?
> 
> Anybody who feels he can speak with authority is welcome ;-)

We appreciate it if you ask first, but yes, sandbox is just what it
says.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 26 17:32:09 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 10:32:09 -0500
Subject: [Python-Dev] Re: Alleged deprecation of shutils
In-Reply-To: Your message of "26 Mar 2001 14:18:26 +0100."
             <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk> 
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> <200103260203.VAA05048@cj20424-a.reston1.va.home.com>  
            <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk> 
Message-ID: <200103261532.KAA06398@cj20424-a.reston1.va.home.com>

> So, a goal for 2.2: getting moving/copying/deleting of files and
> directories working properly (ie. using native APIs) on all major
> supported platforms, with all the legwork that implies.  We're not
> really very far from this now, are we?  Perhaps (the functionality of)
> shutil.{rmtree,copy,copytree} should move into os and if necessary be
> implemented in nt or dos or mac or whatever.  Any others?

Given that it's currently in shutil, please just consider improving
that, unless you believe that the basic API should be completely
different.  This sounds like something PEP-worthy!

--Guido van Rossum (home page: http://www.python.org/~guido/)



From moshez at zadka.site.co.il  Mon Mar 26 17:49:10 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Mon, 26 Mar 2001 17:49:10 +0200
Subject: [Python-Dev] sandbox?
In-Reply-To: <200103261433.JAA06244@cj20424-a.reston1.va.home.com>
References: <200103261433.JAA06244@cj20424-a.reston1.va.home.com>, <E14hUDp-0003tf-00@darjeeling>
Message-ID: <E14hZF0-0004Mj-00@darjeeling>

On Mon, 26 Mar 2001 09:33:00 -0500, Guido van Rossum <guido at digicool.com> wrote:
 
> We appreciate it if you ask first, but yes, sandbox is just what it
> says.

OK, thanks.
I want to checkin my Rational class to the sandbox, probably make
a directory rational/ and put it there.
 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From jeremy at alum.mit.edu  Mon Mar 26 19:57:26 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Mon, 26 Mar 2001 12:57:26 -0500 (EST)
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>
References: <20010324214748.A32161@glacier.fnational.com>
	<LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>
Message-ID: <15039.33542.399553.604556@slothrop.digicool.com>

>>>>> "TP" == Tim Peters <tim.one at home.com> writes:

  >> I'm not sure about Jython.  The sys._getframe(), frame.suspend(),
  >> and frame.resume() low level interface is nice.  I think Jython
  >> must know which frames are going to be suspended at compile time.

  TP> Yes, Samuele said as much.  My belief is that generators don't
  TP> become *truly* pleasant unless "yield" ("suspend"; whatever) is
  TP> made a new statement type.  Then Jython knows exactly where
  TP> yields can occur.  As in CLU (but not Icon), it would also be
  TP> fine by me if routines *used* as generators also needed to be
  TP> explicitly marked as such (this is a non-issue in Icon because
  TP> *every* Icon expression "is a generator" -- there is no other
  TP> kind of procedure there).

If "yield" is a keyword, then any function that uses yield is a
generator.  With this policy, it's straightforward to determine which
functions are generators at compile time.  It's also Pythonic:
Assignment to a name denotes local scope; use of yield denotes
generator. 
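This is, of course, a sketch of the policy Python eventually adopted: the mere presence of "yield" marks a function as a generator at compile time, which modern introspection can confirm (function names below are hypothetical):

```python
# The presence of "yield" alone turns a function into a generator
# function; the decision is made at compile time, and inspect can
# report the resulting code flag.
import inspect

def squares(n):
    for i in range(n):
        yield i * i

def plain(n):
    return n * n

assert inspect.isgeneratorfunction(squares)
assert not inspect.isgeneratorfunction(plain)
assert list(squares(4)) == [0, 1, 4, 9]
```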

Jeremy



From jeremy at digicool.com  Mon Mar 26 21:49:31 2001
From: jeremy at digicool.com (Jeremy Hylton)
Date: Mon, 26 Mar 2001 14:49:31 -0500 (EST)
Subject: [Python-Dev] SF bugs tracker?
Message-ID: <15039.40267.489930.186757@localhost.localdomain>

I've been unable to reach the bugs tracker today.  Every attempt
results in a document-contains-no-data error.  Has anyone else had any
luck?

Jeremy




From jack at oratrix.nl  Mon Mar 26 21:55:40 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Mon, 26 Mar 2001 21:55:40 +0200
Subject: [Python-Dev] test_coercion failing 
In-Reply-To: Message by "Tim Peters" <tim.one@home.com> ,
	     Wed, 21 Mar 2001 15:18:54 -0500 , <LNBBLJKPBEHFEDALKOLCMEEOJHAA.tim.one@home.com> 
Message-ID: <20010326195546.238C0EDD21@oratrix.oratrix.nl>

Well, it turns out that disabling fused-add-mul indeed fixes the
problem. The CodeWarrior manual warns that results may be slightly
different with and without fused instructions, but the example they
give is with operations apparently done in higher precision with the
fused instructions. No word about nonstandard behaviour for +0.0 and
-0.0.

As this seems to be a PowerPC issue, not a MacOS issue, it is
something that other PowerPC porters may want to look out for too
(does AIX still exist?).
--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | see http://www.xs4all.nl/~tank/spg-l/sigaction.htm 



From guido at digicool.com  Mon Mar 26 10:14:14 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 03:14:14 -0500
Subject: [Python-Dev] SF bugs tracker?
In-Reply-To: Your message of "Mon, 26 Mar 2001 14:49:31 EST."
             <15039.40267.489930.186757@localhost.localdomain> 
References: <15039.40267.489930.186757@localhost.localdomain> 
Message-ID: <200103260814.DAA01098@cj20424-a.reston1.va.home.com>

> I've been unable to reach the bugs tracker today.  Every attempt
> results in a document-contains-no-data error.  Has anyone else had any
> luck?

This is a bizarre SF bug.  When you're browsing patches, clicking on
Bugs will give you this error, and vice versa.

My workaround: go to my personal page, click on a bug listed there,
and make an empty change (i.e. click Submit Changes without making any
changes).  This will present the Bugs browser.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Mon Mar 26 11:46:48 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 04:46:48 -0500
Subject: [Python-Dev] WANTED: chairs for next Python conference
Message-ID: <200103260946.EAA02170@cj20424-a.reston1.va.home.com>

I'm looking for chairs for the next Python conference.  At least the
following positions are still open: BOF chair (new!), Application
track chair, Tools track chair.  (The Apps and Tools tracks are
roughly what the Zope and Apps tracks were this year.)  David Ascher
is program chair, I am conference chair (again).

We're in the early stages of conference organization; Foretec is
looking at having it in a Southern city in the US, towards the end of
February 2002.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From paulp at ActiveState.com  Tue Mar 27 00:06:42 2001
From: paulp at ActiveState.com (Paul Prescod)
Date: Mon, 26 Mar 2001 14:06:42 -0800
Subject: [Python-Dev] Making types behave like classes
References: <3ABE66DB.18389.1CB7239A@localhost>
Message-ID: <3ABFBD72.30F69817@ActiveState.com>

Gordon McMillan wrote:
> 
>..
> 
> There's only one difference that matters: subclassing. I don't
> think there's an incremental path to that that leaves Python
> "easily extended".

All of the differences matter! Inconsistency is a problem in and of
itself.

> But not a __name__.

They really do have __name__s. Try it. type("").__name__

> 
> > They are not (yet) callable but I
> > cannot think of a circumstance in which that would matter.
> 
> Take a look at copy.py.

copy.py only expects the type object to be callable WHEN there is a
__getinitargs__ method. Types won't have this method, so copy.py will
never actually call the class. Plus, the whole section only gets run
for objects of type InstanceType.

The important point is that it is not useful to know that __class__ is
callable without knowing the arguments it takes. __class__ is much more
often used as a unique identifier for pointer equality and/or for the
__name__. In looking through the standard library, I can only see places
that the code would improve if __class__ were available for extension
objects.
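A minimal sketch of the two uses described above, identity comparison and __name__ access, neither of which ever calls the class (class names hypothetical):

```python
# __class__ serves as a unique identifier (pointer equality) and as a
# route to a readable __name__; no call is ever made.
class A:
    pass

class B:
    pass

a, b = A(), B()
assert a.__class__ is A                   # identity, not a call
assert a.__class__ is not b.__class__
assert a.__class__.__name__ == "A"
```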

-- 
Take a recipe. Leave a recipe.  
Python Cookbook!  http://www.activestate.com/pythoncookbook



From tim.one at home.com  Tue Mar 27 00:08:30 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 26 Mar 2001 17:08:30 -0500
Subject: [Python-Dev] test_coercion failing 
In-Reply-To: <20010326195546.238C0EDD21@oratrix.oratrix.nl>
Message-ID: <LNBBLJKPBEHFEDALKOLCKEHPJIAA.tim.one@home.com>

[Jack Jansen]
> Well, it turns out that disabling fused-add-mul indeed fixes the
> problem. The CodeWarrior manual warns that results may be slightly
> different with and without fused instructions, but the example they
> give is with operations apparently done in higher precision with the
> fused instructions. No word about nonstandard behaviour for +0.0 and
> -0.0.
>
> As this seems to be a PowerPC issue, not a MacOS issue, it is
> something that other PowerPC porters may want to look out for too
> (does AIX still exist?).

The PowerPC architecture's fused instructions are wonderful for experts,
because in a*b+c (assuming IEEE doubles w/ 53 bits of precision) they compute
the a*b part to 106 bits of precision internally, and the add of c gets to
see all of them.  This is great if you *know* c is pretty much the negation
of the high-order 53 bits of the product, because it lets you get at the
*lower* 53 bits too; e.g.,

    hipart = a*b;
    lopart = a*b - hipart;  /* assuming fused mul-sub is generated */

gives a pair of doubles (hipart, lopart) whose mathematical (not f.p.) sum
hipart + lopart is exactly equal to the mathematical (not f.p.) product a*b.
In the hands of an expert, this can, e.g., be used to write ultra-fast
high-precision math libraries:  it gives a very cheap way to get the effect
of computing with about twice the native precision.
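The same error term can also be recovered portably, without fused instructions, via Dekker's classic splitting trick; a sketch (not from the thread), checked against exact rational arithmetic:

```python
# Dekker/Veltkamp TwoProduct: recover the rounding error of a*b in
# plain IEEE-754 double arithmetic, no fused multiply-add required.
# The pair (hi, lo) satisfies hi + lo == a*b exactly (as rationals),
# provided no overflow or underflow occurs.
from fractions import Fraction

def split(x):
    c = 134217729.0 * x        # 2**27 + 1: Veltkamp splitting constant
    big = c - x
    hi = c - big
    return hi, x - hi

def two_product(a, b):
    p = a * b                  # rounded product (the "hipart")
    ah, al = split(a)
    bh, bl = split(b)
    err = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, err              # err is the "lopart"

a, b = 1.0 / 3.0, 0.1
hi, lo = two_product(a, b)
assert Fraction(hi) + Fraction(lo) == Fraction(a) * Fraction(b)
```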

So that's the kind of thing they're warning you about:  without the fused
mul-sub, "lopart" above is always computed to be exactly 0.0, and so is
useless.  Contrarily, some fp algorithms *depend* on cancelling out oodles of
leading bits in intermediate results, and in the presence of fused mul-add
deliver totally bogus results.

However, screwing up 0's sign bit has nothing to do with any of that, and if
the HW is producing -0 for a fused (+anything)*(+0)-(+0), it can't be called
anything other than a HW bug (assuming it's not in the to-minus-infinity
rounding mode).

When a given compiler generates fused instructions (when available) is a
x-compiler crap-shoot, and the compiler you're using *could* have generated
them before with the same end result.  There's really nothing portable we can
do in the source code to convince a compiler never to generate them.  So
looks like you're stuck with a compiler switch here.

not-the-outcome-i-was-hoping-for-but-i'll-take-it<wink>-ly y'rs  - tim




From tim.one at home.com  Tue Mar 27 00:08:37 2001
From: tim.one at home.com (Tim Peters)
Date: Mon, 26 Mar 2001 17:08:37 -0500
Subject: [Python-Dev] SF bugs tracker?
In-Reply-To: <200103260814.DAA01098@cj20424-a.reston1.va.home.com>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEHPJIAA.tim.one@home.com>

[Jeremy]
> I've been unable to reach the bugs tracker today.  Every attempt
> results in a document-contains-no-data error.  Has anyone else had any
> luck?

[Guido]
> This is a bizarre SF bug.  When you're browsing patches, clicking on
> Bugs will give you this error, and vice versa.
>
> My workaround: go to my personal page, click on a bug listed there,
> and make an empty change (i.e. click Submit Changes without making any
> changes).  This will present the Bugs browser.

Possibly unique to Netscape?  I've never seen this behavior -- although
sometimes I have trouble getting to *patches*, but only when logged in.

clear-the-cache-and-reboot<wink>-ly y'rs  - tim




From moshez at zadka.site.co.il  Tue Mar 27 00:26:44 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Tue, 27 Mar 2001 00:26:44 +0200
Subject: [Python-Dev] Release 2.0.1: Heads Up
Message-ID: <E14hfRk-00051d-00@darjeeling>

Greetings, earthlings!

As Guido said in the last conference, there is going to be a bugfix release
of Python 2.0, Python 2.0.1. Originally meant to be only a license bugfix
release, comments in the Python community have indicated a need for a real
bugfix release. PEP 6[1] has been written by Aahz, which outlines a procedure
for such releases. With Guido's blessing, I have volunteered to be the
Patch Czar (see the PEP!) for the 2.0.1 release. In this job, I intend
to be feared and hated throughout the Python community -- men will 
tremble to hear the sounds of my footsteps...err...sorry, got sidetracked.

This is the first Python pure bugfix release, and I feel a lot of weight
rests on my shoulders as to whether this experiment is successful. Since
this is the first bugfix release, I intend to be ultra-super-conservative.
I can live with a release that does not fix all the bugs; I am very afraid
of a release that breaks a single person's code. Such a thing will give
Python bugfix releases a very bad reputation. So, I am going to be a very
strict Czar.

I will try to follow consistent rules about which patches to integrate,
but I am only human. I will make all my decisions in the public, so they
will be up for review of the community.

There are a few rules I intend to go by:

1. No fixes which you have to change your code to enjoy. (E.g., adding a new
   function because the previous API was idiotic)
2. No fixes which have not been applied to the main branch, unless they
   are not relevant to the main branch at all. I much prefer to get a pointer
   to an applied patch or cvs checkin message than a fresh patch. Of course,
   there are cases where this is impossible, so this isn't strict.
3. No fixes which have "stricter checking". Stricter checking is a good
   thing, but not in bug fix releases.
4. No fixes which have a reasonable chance to break someone's code. That
   means that if there's a bug people have a good chance of counting on,
   it won't be fixed.
5. No "improved documentation/error message" patches. This is stuff that
   gets in people's eyeballs -- I want bugfix upgrades to be as smooth
   as possible.
6. No "internal code was cleaned up". That's a good thing in the development
   branch, but not in bug fix releases.

Note that these rules will *not* be made more lenient, but they might
get stricter, if it seems such strictness is needed in order to make
sure bug fix releases are smooth enough.

However, please remember that this is intended to help you -- the Python
using community. So please, let me know of bugfixes that you need or want
in Python 2.0. I promise that I will consider every request.
Note also that the Patch Czar is given very few responsibilities ---
all my decisions are subject to Guido's approval. That means that he
gets the final word about each patch.

I intend to post a list of patches I intend to integrate soon -- at the
latest, this Friday, hopefully sooner. I expect to have 2.0.1a1 a week
after that, and further schedule requirements will follow from the
quality of that release. Because it has the dual purpose of also being
a license bugfix release, schedule might be influenced by non-technical
issues. As always, Guido will be the final arbitrator.

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From martin at loewis.home.cs.tu-berlin.de  Tue Mar 27 01:00:24 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Tue, 27 Mar 2001 01:00:24 +0200
Subject: [Python-Dev] Release 2.0.1: Heads Up
Message-ID: <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de>

> I have volunteered to be the Patch Czar (see the PEP!) for the 2.0.1
> release

Great!

> So please, let me know of bugfixes that you need or want in Python
> 2.0.

In addition to your procedures (which are all very reasonable), I'd
like to point out that Tim has created a 2.0.1 patch class on the SF
patch manager. I hope you find the time to review the patches in there
(which should not be very difficult at the moment). This is meant for
patches which can't be proposed in terms of 'cvs diff' commands; for
mere copying of code from the mainline, this is probably overkill.

Also note that I have started to give a detailed analysis of what
exactly has changed in the NEWS file of the 2.0 maintenance branch -
I'm curious to know what you think about this procedure. If you don't like
it, feel free to undo my changes there.

Regards,
Martin



From guido at digicool.com  Mon Mar 26 13:23:08 2001
From: guido at digicool.com (Guido van Rossum)
Date: Mon, 26 Mar 2001 06:23:08 -0500
Subject: [Python-Dev] Release 2.0.1: Heads Up
In-Reply-To: Your message of "Tue, 27 Mar 2001 01:00:24 +0200."
             <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de> 
References: <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de> 
Message-ID: <200103261123.GAA02697@cj20424-a.reston1.va.home.com>

> > I have volunteered to be the Patch Czar (see the PEP!) for the 2.0.1
> > release
> 
> Great!

Congratulations to Moshe.

> > So please, let me know of bugfixes that you need or want in Python
> > 2.0.
> 
> In addition to your procedures (which are all very reasonable), I'd
> like to point out that Tim has created a 2.0.1 patch class on the SF
> patch manager. I hope you find the time to review the patches in there
> (which should not be very difficult at the moment). This is meant for
> patches which can't be proposed in terms of 'cvs diff' commands; for
> mere copying of code from the mainline, this is probably overkill.
> 
> Also note that I have started to give a detailed analysis of what
> exactly has changed in the NEWS file of the 2.0 maintenance branch -
> I'm curious to know what you think about this procedure. If you don't like
> it, feel free to undo my changes there.

Regardless of what Moshe thinks, *I* think that's a great idea.  I
hope that Moshe continues this.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From aahz at panix.com  Tue Mar 27 01:35:55 2001
From: aahz at panix.com (aahz at panix.com)
Date: Mon, 26 Mar 2001 15:35:55 -0800 (PST)
Subject: [Python-Dev] PEP 6 cleanup
Message-ID: <200103262335.SAA22663@panix3.panix.com>

Now that Moshe has agreed to be Patch Czar for 2.0.1, I'd like some
clarification/advice on a couple of issues before I release the next
draft:

Issues To Be Resolved

    What is the equivalent of python-dev for people who are responsible
    for maintaining Python?  (Aahz proposes either python-patch or
    python-maint, hosted at either python.org or xs4all.net.)

    Does SourceForge make it possible to maintain both separate and
    combined bug lists for multiple forks?  If not, how do we mark bugs
    fixed in different forks?  (Simplest is to simply generate a new bug
    for each fork that it gets fixed in, referring back to the main bug
    number for details.)



From moshez at zadka.site.co.il  Tue Mar 27 01:49:33 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Tue, 27 Mar 2001 01:49:33 +0200
Subject: [Python-Dev] Release 2.0.1: Heads Up
In-Reply-To: <200103261123.GAA02697@cj20424-a.reston1.va.home.com>
References: <200103261123.GAA02697@cj20424-a.reston1.va.home.com>, <200103262300.f2QN0O902234@mira.informatik.hu-berlin.de>
Message-ID: <E14hgjt-0005KI-00@darjeeling>

On Mon, 26 Mar 2001 06:23:08 -0500, Guido van Rossum <guido at digicool.com> wrote:

> > Also note that I have started to give a detailed analysis of what
> > exactly has changed in the NEWS file of the 2.0 maintenance branch -
> > I'm curious to know what you think about this procedure. If you don't like
> > it, feel free to undo my changes there.
> 
> Regardless of what Moshe thinks, *I* think that's a great idea.  I
> hope that Moshe continues this.

I will, I think this is a good idea too.
I'm still working on a log to detail the patches I intend to backport
(some will take some effort because of several major overhauls I do
*not* intend to backport, like reindentation and string methods).
I already trimmed it down to 200-something patches I'm going to think
of integrating, and I'm now making a second pass over it. 
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From nas at python.ca  Tue Mar 27 06:43:33 2001
From: nas at python.ca (Neil Schemenauer)
Date: Mon, 26 Mar 2001 20:43:33 -0800
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>; from tim.one@home.com on Sun, Mar 25, 2001 at 01:11:58AM -0500
References: <20010324214748.A32161@glacier.fnational.com> <LNBBLJKPBEHFEDALKOLCGEDDJIAA.tim.one@home.com>
Message-ID: <20010326204333.A17390@glacier.fnational.com>

Tim Peters wrote:
> My belief is that generators don't become *truly* pleasant
> unless "yield" ("suspend"; whatever) is made a new statement
> type.

That's fine, but how do you create a generator?  I suppose that
using a "yield" statement within a function could make it into a
generator.  Then, calling it would create an instance of a
generator.  Seems a bit too magical to me.

  Neil



From nas at arctrix.com  Tue Mar 27 07:08:24 2001
From: nas at arctrix.com (Neil Schemenauer)
Date: Mon, 26 Mar 2001 21:08:24 -0800
Subject: [Python-Dev] nano-threads?
Message-ID: <20010326210824.B17390@glacier.fnational.com>

Here are some silly bits of code implementing single frame
coroutines and threads using my frame suspend/resume patch.
The coroutine example does not allow a value to be passed but
that would be simple to add.  An updated version of the (very
experimental) patch is here:

    http://arctrix.com/nas/generator3.diff

For me, thinking in terms of frames is quite natural and I didn't
have any trouble writing these examples.  I'm hoping they will be
useful to other people who are trying to get their minds around
continuations.  If you're sick of such postings on python-dev, flame
me privately and I will stop.  Cheers,

  Neil

#####################################################################
# Single frame threads (nano-threads?).  Output should be:
#
# foo
# bar
# foo
# bar
# bar

import sys

def yield():
    f = sys._getframe(1)
    f.suspend(f)

def run_threads(threads):
    frame = {}
    for t in threads:
        frame[t] = t()
    while threads:
        for t in threads[:]:
            f = frame.get(t)
            if not f:
                threads.remove(t)
            else:
                frame[t] = f.resume()


def foo():
    for x in range(2):
        print "foo"
        yield()

def bar():
    for x in range(3):
        print "bar"
        yield()

def test():
    run_threads([foo, bar])

test()
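[Ed. -- for comparison, a sketch of the same round-robin scheduler written
in the generator syntax Python later adopted; the names mirror the example
above, but none of this code is from the frame suspend/resume patch:]

```python
def foo():
    for _ in range(2):
        print("foo")
        yield           # give up the timeslice

def bar():
    for _ in range(3):
        print("bar")
        yield

def run_threads(threads):
    # round-robin over the live generators, dropping each one
    # as soon as it is exhausted
    gens = [t() for t in threads]
    while gens:
        for g in gens[:]:
            try:
                next(g)
            except StopIteration:
                gens.remove(g)

run_threads([foo, bar])     # foo bar foo bar bar
```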

#####################################################################
# Single frame coroutines.  Should print:
#
# foo
# bar
# baz
# foo
# bar
# baz
# foo
# ...

import sys

def transfer(func):
    f = sys._getframe(1)
    f.suspend((f, func))

def run_coroutines(args):
    funcs = {}
    for f in args:
        funcs[f] = f
    current = args[0]
    while 1:
        rv = funcs[current]()
        if not rv:
            break
        (frame, next) = rv
        funcs[current] = frame.resume
        current = next


def foo():
    while 1:
        print "foo"
        transfer(bar)

def bar():
    while 1:
        print "bar"
        transfer(baz)
        transfer(foo)



From greg at cosc.canterbury.ac.nz  Tue Mar 27 07:48:24 2001
From: greg at cosc.canterbury.ac.nz (Greg Ewing)
Date: Tue, 27 Mar 2001 17:48:24 +1200 (NZST)
Subject: [Python-Dev] Simple generators, round 2
In-Reply-To: <15039.33542.399553.604556@slothrop.digicool.com>
Message-ID: <200103270548.RAA09571@s454.cosc.canterbury.ac.nz>

Jeremy Hylton <jeremy at alum.mit.edu>:

> If "yield" is a keyword, then any function that uses yield is a
> generator.  With this policy, it's straightforward to determine which
> functions are generators at compile time.

But a function which calls a function that contains
a "yield" is a generator, too. Does the compiler need
to know about such functions?
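[Ed. -- under the semantics eventually adopted, the answer is no: only the
function whose own body contains "yield" is a generator function.  A caller
is an ordinary function that merely receives the generator object, so the
compiler needs no knowledge of callers.  A sketch:]

```python
def gen():
    yield 1
    yield 2

def caller():
    # an ordinary function: it just returns the generator object
    return gen()

print(list(caller()))   # [1, 2]
```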

Greg Ewing, Computer Science Dept, +--------------------------------------+
University of Canterbury,	   | A citizen of NewZealandCorp, a	  |
Christchurch, New Zealand	   | wholly-owned subsidiary of USA Inc.  |
greg at cosc.canterbury.ac.nz	   +--------------------------------------+



From jeremy at digicool.com  Tue Mar 27 19:06:20 2001
From: jeremy at digicool.com (Jeremy Hylton)
Date: Tue, 27 Mar 2001 12:06:20 -0500 (EST)
Subject: [Python-Dev] distutils change breaks code, Pyfort
In-Reply-To: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>
References: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>
Message-ID: <15040.51340.820929.133487@localhost.localdomain>

>>>>> "PFD" == Paul F Dubois <paul at pfdubois.com> writes:

  PFD> The requirement of a version argument to the distutils command
  PFD> breaks Pyfort and many of my existing packages. These packages
  PFD> are not intended for use with the distribution commands and a
  PFD> package version number would be meaningless.

  PFD> I will make a new Pyfort that supplies a version number to the
  PFD> call it makes to setup. However, I think this change to
  PFD> distutils is a poor idea. If the version number would be
  PFD> required for the distribution commands, let *them* complain,
  PFD> perhaps by setting a default value of
  PFD> time.asctime(time.gmtime()) or something that the distribution
  PFD> commands could object to.

  PFD> I apologize if I missed an earlier discussion of this change
  PFD> that seems to be in 2.1b2 but not 2.1b1, as I am new to this
  PFD> list.

I haven't seen this distutils change discussed on this list.  It
raises a good question, though: should distutils be allowed to change
between beta releases in a way that breaks user code?

There are two possibilities:

1. Guido has decided that distutils release cycles need not be related
   to Python release cycles.  He has said as much for pydoc.  If so,
   the timing of the change is just an unhappy coincidence.

2. Distutils is considered to be part of the standard library and
   should follow the same rules as the rest of the library.  No new
   features after the first beta release, just bug fixes.  And no
   incompatible changes without ample warning.

I think that distutils is mature enough to follow the second set of
rules -- and that the change should be reverted before the final
release.

Jeremy




From gward at python.net  Tue Mar 27 19:09:15 2001
From: gward at python.net (Greg Ward)
Date: Tue, 27 Mar 2001 12:09:15 -0500
Subject: [Python-Dev] setup.py is too aggressive
In-Reply-To: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>; from loewis@informatik.hu-berlin.de on Sat, Mar 24, 2001 at 01:02:53PM +0100
References: <200103241202.NAA19000@pandora.informatik.hu-berlin.de>
Message-ID: <20010327120915.A16082@cthulhu.gerg.ca>

On 24 March 2001, Martin von Loewis said:
> There should be a mechanism to tell setup.py not to build a module at
> all. Since it is looking into Modules/Setup anyway, perhaps a
> 
> *excluded*
> dbm
> 
> syntax in Modules/Setup would be appropriate? Of course, makesetup
> needs to be taught such a syntax. Alternatively, an additional
> configuration file or command line options might work.

FWIW, any new "Setup" syntax would also have to be taught to the
'read_setup_file()' function in distutils.extension.

        Greg
-- 
Greg Ward - nerd                                        gward at python.net
http://starship.python.net/~gward/
We have always been at war with Oceania.



From gward at python.net  Tue Mar 27 19:13:35 2001
From: gward at python.net (Greg Ward)
Date: Tue, 27 Mar 2001 12:13:35 -0500
Subject: [Python-Dev] Re: Alleged deprecation of shutils
In-Reply-To: <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>; from mwh21@cam.ac.uk on Mon, Mar 26, 2001 at 02:18:26PM +0100
References: <200103252153.JAA09102@s454.cosc.canterbury.ac.nz> <200103260203.VAA05048@cj20424-a.reston1.va.home.com> <m3r8zkd5y5.fsf@atrus.jesus.cam.ac.uk>
Message-ID: <20010327121335.B16082@cthulhu.gerg.ca>

On 26 March 2001, Michael Hudson said:
> So, a goal for 2.2: getting moving/copying/deleting of files and
> directories working properly (ie. using native APIs) on all major
> supported platforms, with all the legwork that implies.  We're not
> really very far from this now, are we?  Perhaps (the functionality of)
> shutil.{rmtree,copy,copytree} should move into os and if necessary be
> implemented in nt or dos or mac or whatever.  Any others?

The code already exists, in distutils/file_utils.py.  It's just a
question of giving it a home in the main body of the standard library.

(FWIW, the reasons I didn't patch shutil.py are 1) I didn't want to be
constrained by backward compatibility, and 2) I didn't have a time
machine to go back and change shutil.py in all existing 1.5.2
installations.)

        Greg
-- 
Greg Ward - just another /P(erl|ython)/ hacker          gward at python.net
http://starship.python.net/~gward/
No animals were harmed in transmitting this message.



From guido at digicool.com  Tue Mar 27 07:33:46 2001
From: guido at digicool.com (Guido van Rossum)
Date: Tue, 27 Mar 2001 00:33:46 -0500
Subject: [Python-Dev] distutils change breaks code, Pyfort
In-Reply-To: Your message of "Tue, 27 Mar 2001 12:06:20 EST."
             <15040.51340.820929.133487@localhost.localdomain> 
References: <ADEOIFHFONCLEEPKCACCCEDKCHAA.paul@pfdubois.com>  
            <15040.51340.820929.133487@localhost.localdomain> 
Message-ID: <200103270533.AAA04707@cj20424-a.reston1.va.home.com>

> >>>>> "PFD" == Paul F Dubois <paul at pfdubois.com> writes:
> 
>   PFD> The requirement of a version argument to the distutils command
>   PFD> breaks Pyfort and many of my existing packages. These packages
>   PFD> are not intended for use with the distribution commands and a
>   PFD> package version number would be meaningless.
> 
>   PFD> I will make a new Pyfort that supplies a version number to the
>   PFD> call it makes to setup. However, I think this change to
>   PFD> distutils is a poor idea. If the version number would be
>   PFD> required for the distribution commands, let *them* complain,
>   PFD> perhaps by setting a default value of
>   PFD> time.asctime(time.gmtime()) or something that the distribution
>   PFD> commands could object to.
> 
>   PFD> I apologize if I missed an earlier discussion of this change
>   PFD> that seems to be in 2.1b2 but not 2.1b1, as I am new to this
>   PFD> list.
> 
> I haven't seen this distutils change discussed on this list.  It
> raises a good question, though: should distutils be allowed to change
> between beta releases in a way that breaks user code?
> 
> There are two possibilities:
> 
> 1. Guido has decided that distutils release cycles need not be related
>    to Python release cycles.  He has said as much for pydoc.  If so,
>    the timing of the change is just an unhappy coincidence.
> 
> 2. Distutils is considered to be part of the standard library and
>    should follow the same rules as the rest of the library.  No new
>    features after the first beta release, just bug fixes.  And no
>    incompatible changes without ample warning.
> 
> I think that distutils is mature enough to follow the second set of
> rules -- and that the change should be reverted before the final
> release.
> 
> Jeremy

I agree.  *Allowing* a version argument is fine.  *Requiring* it is
too late in the game.  (And may be a wrong choice anyway, but I'm not
sure of the issues.)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fdrake at acm.org  Wed Mar 28 16:39:42 2001
From: fdrake at acm.org (Fred L. Drake, Jr.)
Date: Wed, 28 Mar 2001 09:39:42 -0500 (EST)
Subject: [Python-Dev] SF bugs tracker?
In-Reply-To: <LNBBLJKPBEHFEDALKOLCMEHPJIAA.tim.one@home.com>
References: <200103260814.DAA01098@cj20424-a.reston1.va.home.com>
	<LNBBLJKPBEHFEDALKOLCMEHPJIAA.tim.one@home.com>
Message-ID: <15041.63406.740044.659810@cj42289-a.reston1.va.home.com>

Tim Peters writes:
 > Possibly unique to Netscape?  I've never seen this behavior -- although
 > sometimes I have trouble getting to *patches*, but only when logged in.

  No -- I was getting this with Konqueror as well.  Konqueror is the
KDE 2 browser/file manager.


  -Fred

-- 
Fred L. Drake, Jr.  <fdrake at acm.org>
PythonLabs at Digital Creations




From moshez at zadka.site.co.il  Wed Mar 28 19:02:01 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 19:02:01 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
Message-ID: <E14iJKb-0000Kf-00@darjeeling>

After labouring over the list of log messages for 2-3 days, I finally
have a tentative list of changes. I present it as a list of checkin
messages, complete with the versions. Sometimes I concatenated several
consecutive checkins into one -- "I fixed the bug", "oops, typo last
fix" and similar.

Please go over the list and see if there's anything you feel should
not go in.
I'll write a short script to dump the patch files later today, so I
can start applying them soon -- so please look over the list and check
that I have not made any terrible mistakes.
Thanks in advance

Wholesale: Lib/tempfile.py (modulo __all__)
           Lib/sre.py
           Lib/sre_compile.py
           Lib/sre_constants.py
           Lib/sre_parse.py
           Modules/_sre.c          
----------------------------
Lib/locale.py, 1.15->1.16
setlocale(): In _locale-missing compatibility function, string
comparison should be done with != instead of "is not".
----------------------------
Lib/xml/dom/pulldom.py, 1.20->1.21

When creating an attribute node using createAttribute() or
createAttributeNS(), use the parallel setAttributeNode() or
setAttributeNodeNS() to add the node to the document -- do not assume
that setAttributeNode() will operate properly for both.
----------------------------
Python/pythonrun.c, 2.128->2.129
Fix memory leak with SyntaxError.  (The DECREF was originally hidden
inside a piece of code that was deemed redundant; the DECREF was
unfortunately *not* redundant!)
----------------------------
Lib/quopri.py, 1.10->1.11
Strip \r as trailing whitespace as part of soft line endings.

Inspired by SF patch #408597 (Walter Dörwald): quopri, soft line
breaks and CRLF.  (I changed (" ", "\t", "\r") into " \t\r".)
----------------------------
Modules/bsddbmodule.c, 1.28->1.29
Don't raise MemoryError in keys() when the database is empty.

This fixes SF bug #410146 (python 2.1b shelve is broken).
----------------------------
Lib/fnmatch.py, 1.10->1.11

Donovan Baarda <abo at users.sourceforge.net>:
Patch to make "\" in a character group work properly.

This closes SF bug #409651.
----------------------------
Objects/complexobject.c, 2.34->2.35
SF bug [ #409448 ] Complex division is braindead
http://sourceforge.net/tracker/?func=detail&aid=409448&group_id=5470&atid=105470
Now less braindead.  Also added test_complex.py, which doesn't test much, but
fails without this patch.
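[Ed. -- the "less braindead" division scales by the larger component of the
denominator, in the style of Smith's algorithm, so that the naive formula
(ac+bd)/(c*c+d*d) does not overflow as easily.  A sketch of the idea, not
necessarily the exact patch:]

```python
def c_div(a, b):
    # scaled complex division: form the ratio of the smaller to the
    # larger component of b before dividing
    if abs(b.real) >= abs(b.imag):
        r = b.imag / b.real
        d = b.real + b.imag * r
        return complex((a.real + a.imag * r) / d,
                       (a.imag - a.real * r) / d)
    else:
        r = b.real / b.imag
        d = b.imag + b.real * r
        return complex((a.real * r + a.imag) / d,
                       (a.imag * r - a.real) / d)
```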
----------------------------
Modules/cPickle.c, 2.54->2.55
SF bug [ #233200 ] cPickle does not use Py_BEGIN_ALLOW_THREADS.
http://sourceforge.net/tracker/?func=detail&aid=233200&group_id=5470&atid=105470
Wrapped the fread/fwrite calls in thread BEGIN_ALLOW/END_ALLOW brackets
Afraid I hit the "delete trailing whitespace key" too!  Only two "real" sections
of code changed here.
----------------------------
Lib/xml/sax/xmlreader.py, 1.13->1.14

Import the exceptions that this module can raise.
----------------------------
Lib/xmllib.py, 1.27->1.28
Moved clearing of "literal" flag.  The flag is set in setliteral which
can be called from a start tag handler.  When the corresponding end
tag is read the flag is cleared.  However, it didn't get cleared when
the start tag was for an empty element of the type <tag .../>.  This
modification fixes the problem.
----------------------------
Modules/pwdmodule.c, 1.24->1.25
Modules/grpmodule.c, 1.14->1.15

Make sure we close the group and password databases when we are done with
them; this closes SF bug #407504.
----------------------------
Python/errors.c, 2.61->2.62
Objects/intobject.c, 2.55->2.56
Modules/timemodule.c, 2.107->2.108
Use Py_CHARMASK for ctype macros. Fixes bug #232787.
----------------------------
Modules/termios.c, 2.17->2.18

Add more protection around the VSWTC/VSWTCH, CRTSCTS, and XTABS symbols;
these can be missing on some (all?) Irix and Tru64 versions.

Protect the CRTSCTS value with a cast; this can be a larger value on
Solaris/SPARC.

This should fix SF tracker items #405092, #405350, and #405355.
----------------------------
Modules/pyexpat.c, 2.42->2.43

Wrap some long lines, use only C89 /* */ comments, and add spaces around
some operators (style guide conformance).
----------------------------
Modules/termios.c, 2.15->2.16

Revised version of Jason Tishler's patch to make this compile on Cygwin,
which does not define all the constants.

This closes SF tracker patch #404924.
----------------------------
Modules/bsddbmodule.c, 1.27->1.28

Gustavo Niemeyer <niemeyer at conectiva.com>:
Fixed recno support (keys are integers rather than strings).
Work around a DB bug that caused stdin to be closed by rnopen() when the
DB file needed to exist but did not (no longer segfaults).

This closes SF tracker patch #403445.

Also wrapped some long lines and added whitespace around operators -- FLD.
----------------------------
Lib/urllib.py, 1.117->1.118
Fixing bug #227562 by calling  URLopener.http_error_default when
an invalid 401 request is being handled.
----------------------------
Python/compile.c, 2.170->2.171
Shuffle premature decref; nuke unreachable code block.
Fixes the "debug-build -O test_builtin.py and no test_b2.pyo" crash just
discussed on Python-Dev.
----------------------------
Python/import.c, 2.161->2.162
The code in PyImport_Import() tried to save itself a bit of work and
save the __builtin__ module in a static variable.  But this doesn't
work across Py_Finalise()/Py_Initialize()!  It also doesn't work when
using multiple interpreter states created with PyInterpreterState_New().

So I'm ripping out this small optimization.

This was probably broken since PyImport_Import() was introduced in
1997!  We really need a better test suite for multiple interpreter
states and repeatedly initializing.

This fixes the problems Barry reported in Demo/embed/loop.c.
----------------------------
Modules/unicodedata.c, 2.9->2.11


renamed internal functions to avoid name clashes under OpenVMS
(fixes bug #132815)
----------------------------
Modules/pyexpat.c, 2.40->2.41

Remove the old version of my_StartElementHandler().  This was conditionally
compiled only for some versions of Expat, but was no longer needed as the
new implementation works for all versions.  Keeping it created multiple
definitions for Expat 1.2, which caused compilation to fail.
----------------------------
Lib/urllib.py, 1.116->1.117
provide simple recovery/escape from apparent redirect recursion.  If the
number of entries into http_error_302 exceeds the value set for the maxtries
attribute (which defaults to 10), the recursion is exited by calling
the http_error_500 method (or if that is not defined, http_error_default).
----------------------------
Modules/posixmodule.c, 2.183->2.184

Add a few more missing prototypes to the SunOS 4.1.4 section (no SF
bugreport, just an IRC one by Marion Delgado.) These prototypes are
necessary because the functions are tossed around, not just called.
----------------------------
Modules/mpzmodule.c, 2.35->2.36

Richard Fish <rfish at users.sourceforge.net>:
Fix the .binary() method of mpz objects for 64-bit systems.

[Also removed a lot of trailing whitespace elsewhere in the file. --FLD]

This closes SF patch #103547.
----------------------------
Python/pythonrun.c, 2.121->2.122
Ugly fix for SF bug 131239 (-x flag busted).
Bug was introduced by tricks played to make .pyc files executable
via cmdline arg.  Then again, -x worked via a trick to begin with.
If anyone can think of a portable way to test -x, be my guest!
----------------------------
Makefile.pre.in, 1.15->1.16
Specify directory permissions properly.  Closes SF patch #103717.
----------------------------
install-sh, 2.3->2.4
Update install-sh using version from automake 1.4.  Closes patch #103657
and #103717.
----------------------------
Modules/socketmodule.c, 1.135->1.136
Patch #103636: Allow writing strings containing null bytes to an SSL socket
----------------------------
Modules/mpzmodule.c, 2.34->2.35
Patch #103523, to make mpz module compile with Cygwin
----------------------------
Objects/floatobject.c, 2.78->2.79
SF patch 103543 from tg at freebsd.org:
PyFPE_END_PROTECT() was called on undefined var
----------------------------
Modules/posixmodule.c, 2.181->2.182
Fix Bug #125891 - os.popen2,3 and 4 leaked file objects on Windows.
----------------------------
Python/ceval.c, 2.224->2.225
SF bug #130532:  newest CVS won't build on AIX.
Removed illegal redefinition of REPR macro; kept the one with the
argument name that isn't too easy to confuse with zero <wink>.
----------------------------
Objects/classobject.c, 2.35->2.36
Rename dubiously named local variable 'cmpfunc' -- this is also a
typedef, and at least one compiler choked on this.

(SF patch #103457, by bquinlan)
----------------------------
Modules/_cursesmodule.c, 2.47->2.50
Patch #103485 from Donn Cave: patches to make the module compile on AIX and
    NetBSD
Rename 'lines' variable to 'nlines' to avoid conflict with a macro defined
    in term.h
2001/01/28 18:10:23 akuchling Modules/_cursesmodule.c
Bug #130117: add a prototype required to compile cleanly on IRIX
   (contributed by Paul Jackson)
----------------------------
Lib/statcache.py, 1.9->1.10
SF bug #130306:  statcache.py full of thread problems.
Fixed the thread races.  Function forget_dir was also utterly Unix-specific.
----------------------------
Python/structmember.c, 1.74->1.75
SF bug http://sourceforge.net/bugs/?func=detailbug&bug_id=130242&group_id=5470
SF patch http://sourceforge.net/patch/?func=detailpatch&patch_id=103453&group_id=5470
PyMember_Set of T_CHAR always raises exception.
Unfortunately, this is a use of a C API function that Python itself never makes, so
there's no .py test I can check in to verify this stays fixed.  But the fault in the
code is obvious, and Dave Cole's patch just as obviously fixes it.
----------------------------
Modules/arraymodule.c, 2.61->2.62
Correct one-line typo, reported by yole @ SF, bug 130077.
----------------------------
Python/compile.c, 2.150->2.151
Fix bug reported by Ka-Ping Yee: The compiler botched parsing function
parameters that contained both anonymous tuples and *arg or **arg. Ex:
def f(a, (b, c), *d): pass

Fix the symtable_params() to generate names in the right order for
co_varnames slot of code object.  Consider *arg and **arg before the
"complex" names introduced by anonymous tuples.
----------------------------
Modules/config.c.in, 1.72->1.73
_PyImport_Inittab: define the exceptions module's init function.
Fixes bug #121706.
----------------------------
Python/exceptions.c, 1.19->1.20
[Ed. -- only partial]
Leak pluggin', bug fixin' and better documentin'.  Specifically,

module__doc__: Document the Warning subclass hierarchy.

make_class(): Added a "goto finally" so that if populate_methods()
fails, the return status will be -1 (failure) instead of 0 (success).

fini_exceptions(): When decref'ing the static pointers to the
exception classes, clear out their dictionaries too.  This breaks a
cycle from class->dict->method->class and allows the classes with
unbound methods to be reclaimed.  This plugs a large memory leak in a
common Py_Initialize()/dosomething/Py_Finalize() loop.
----------------------------
Python/pythonrun.c, 2.118->2.119
Lib/atexit.py, 1.3->1.4
Bug #128475: mimetools.encode (sometimes) fails when called from a thread.
pythonrun.c:  In Py_Finalize, don't reset the initialized flag until after
the exit funcs have run.
atexit.py:  in _run_exitfuncs, mutate the list of pending calls in a
threadsafe way.  This wasn't a contributor to bug 128475, it just burned
my eyeballs when looking at that bug.
----------------------------
Modules/ucnhash.c, 1.6->1.7
gethash/cmpname both looked beyond the end of the character name.
This patch makes u"\N{x}" a bit less dependent on pure luck...
----------------------------
Lib/urllib.py, 1.112->1.113
Anonymous SF bug 129288: "The python 2.0 urllib has %%%x as a format
when quoting forbidden characters. There are scripts out there that
break with lower case, therefore I guess %%%X should be used."

I agree, so am fixing this.
----------------------------
Python/bltinmodule.c, 2.191->2.192
Fix for the bug in complex() just reported by Ping.
----------------------------
Modules/socketmodule.c, 1.130->1.131
Use openssl/*.h to include the OpenSSL header files
----------------------------
Lib/distutils/command/install.py, 1.55->1.56
Modified version of a patch from Jeremy Kloth, to make .get_outputs()
produce a list of unique filenames:
    "While attempting to build an RPM using distutils on Python 2.0,
    rpm complained about duplicate files.  The following patch fixed
    that problem."
----------------------------
Objects/unicodeobject.c, 2.72->2.73
Objects/stringobject.c, 2.96->2.97
(partial)
Added checks to prevent PyUnicode_Count() from dumping core
in case the parameters are out of bounds and fixes error handling
for .count(), .startswith() and .endswith() for the case of
mixed string/Unicode objects.

This patch adds Python style index semantics to PyUnicode_Count()
indices (including the special handling of negative indices).
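[Ed. -- "Python style index semantics" means the slice rules: negative
indices count from the end and out-of-range indices are clamped.  A sketch
of the adjustment (hypothetical helper, not the C code):]

```python
def adjust_indices(start, end, length):
    # mimic s[start:end] index handling: negative indices count back
    # from the end, and everything is clamped to [0, length]
    if start < 0:
        start = max(0, start + length)
    elif start > length:
        start = length
    if end < 0:
        end = max(0, end + length)
    elif end > length:
        end = length
    return start, end
```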

The patch is an extended version of patch #103249 submitted
by Michael Hudson (mwh) on SF. It also includes new test cases.
----------------------------
Modules/posixmodule.c, 2.180->2.181
Plug memory leak.
----------------------------
Python/dynload_mac.c, 2.9->2.11
Use #if TARGET_API_MAC_CARBON to determine carbon/classic macos, not #ifdef.
Added a separate extension (.carbon.slb) for Carbon dynamic modules.
----------------------------
Modules/mmapmodule.c, 2.26->2.27
SF bug 128713:  type(mmap_object) blew up on Linux.
----------------------------
Python/sysmodule.c, 2.81->2.82
stdout is sometimes a macro; use "outf" instead.

Submitted by: Mark Favas <m.favas at per.dem.csiro.au>
----------------------------
Python/ceval.c, 2.215->2.216
Add missing Py_DECREF in fast_cfunction.  Partial fix for SF bug
#127699.
----------------------------
Modules/mmapmodule.c, 2.24->2.25
Windows mmap should (as the docs probably <wink> say) create a mapping
without a name when the optional tagname arg isn't specified.  Was
actually creating a mapping with an empty string as the name.
----------------------------
Lib/shlex.py, 1.10->1.11
Patch #102953: Fix bug #125452, where shlex.shlex hangs when it
    encounters a string with an unmatched quote, by adding a check for
    EOF in the 'quotes' state.
----------------------------
Modules/binascii.c, 2.27->2.28
Address a bug in the uuencode decoder, reported by "donut" in SF bug
#127718: '@' and '`' seem to be confused.
----------------------------
Objects/fileobject.c, 2.102->2.103
Tsk, tsk, tsk.  Treat FreeBSD the same as the other BSDs when defining
a fallback for TELL64.  Fixes SF Bug #128119.
----------------------------
Modules/posixmodule.c, 2.179->2.180
Anonymous SF bug report #128053 points out that the #ifdef for
including "tmpfile" in the posix_methods[] array is wrong -- should be
HAVE_TMPFILE, not HAVE_TMPNAM.
----------------------------
Lib/urllib.py, 1.109->1.110
Fixed bug which caused HTTPS not to work at all with string URLs
----------------------------
Objects/floatobject.c, 2.76->2.77
Fix a silly bug in float_pow.  Sorry Tim.
----------------------------
Modules/fpectlmodule.c, 2.12->2.13
Patch #103012: Update fpectlmodule for current glibc;
    The _setfpucw() function/macro doesn't seem to exist any more;
    instead there's an _FPU_SETCW macro.
----------------------------
Objects/dictobject.c, 2.71->2.72
dict_update has two boundary conditions: a.update(a) and a.update({})
Added test for second one.
----------------------------
Objects/listobject.c
fix leak
----------------------------
Lib/getopt.py, 1.11->1.13
getopt used to sort the long option names, in an attempt to simplify
the logic.  That resulted in a bug.  My previous getopt checkin repaired
the bug but left the sorting.  The solution is significantly simpler if
we don't bother sorting at all, so this checkin gets rid of the sort and
the code that relied on it.
Fix for SF bug
https://sourceforge.net/bugs/?func=detailbug&bug_id=126863&group_id=5470
"getopt long option handling broken".  Tossed the excruciating logic in
long_has_args in favor of something obviously correct.
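[Ed. -- the "obviously correct" approach: accept an exact match or a unique
prefix, without sorting.  A sketch (hypothetical helper name, simplified
from what getopt actually does):]

```python
def match_long(opt, longopts):
    # exact match wins outright; otherwise the option must be an
    # unambiguous prefix of exactly one known long option
    if opt in longopts:
        return opt
    matches = [o for o in longopts if o.startswith(opt)]
    if not matches:
        raise ValueError("option --%s not recognized" % opt)
    if len(matches) > 1:
        raise ValueError("option --%s is not a unique prefix" % opt)
    return matches[0]
```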
----------------------------
Lib/curses/ascii.py, 1.3->1.4
Make isspace(chr(32)) return true
----------------------------
Lib/distutils/command/install.py, 1.54->1.55
Add forgotten initialization.  Fixes bug #120994, "Traceback with
    DISTUTILS_DEBUG set"
----------------------------
Objects/unicodeobject.c, 2.68->2.69
Fix off-by-one error in split_substring().  Fixes SF bug #122162.
----------------------------
Modules/cPickle.c, 2.53->2.54
Lib/pickle.py, 1.40->1.41
Minimal fix for the complaints about pickling Unicode objects.  (SF
bugs #126161 and 123634).

The solution doesn't use the unicode-escape encoding; that has other
problems (it seems not 100% reversible).  Rather, it transforms the
input Unicode object slightly before encoding it using
raw-unicode-escape, so that the decoding will reconstruct the original
string: backslash and newline characters are translated into their
\uXXXX counterparts.

This is backwards incompatible for strings containing backslashes, but
for some of those strings, the pickling was already broken.

Note that SF bug #123634 complains specifically that cPickle fails to
unpickle the pickle for u'' (the empty Unicode string) correctly.
This was an off-by-one error in load_unicode().

XXX Ugliness: in order to do the modified raw-unicode-escape, I've
cut-and-pasted a copy of PyUnicode_EncodeRawUnicodeEscape() into this
file that also encodes '\\' and '\n'.  It might be nice to migrate
this into the Unicode implementation and give this encoding a new name
('half-raw-unicode-escape'? 'pickle-unicode-escape'?); that would help
pickle.py too.  But right now I can't be bothered with the necessary
infrastructural changes.
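[Ed. -- the transformation described above can be sketched in pure Python:
escape backslash and newline as \uXXXX before applying raw-unicode-escape,
so that decoding reconstructs the original string.  Hypothetical helper
names; the real fix is C code in cPickle.c and pickle.py:]

```python
def pickle_unicode(s):
    # backslash and newline become \u005c and \u000a, which the
    # raw-unicode-escape decoder will turn back into the originals
    t = s.replace("\\", "\\u005c").replace("\n", "\\u000a")
    return t.encode("raw-unicode-escape")

def unpickle_unicode(b):
    return b.decode("raw-unicode-escape")
```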
----------------------------
Modules/socketmodule.c, 1.129->1.130
Adapted from a patch by Barry Scott, SF patch #102875 and SF bug
#125981: closing sockets was not thread-safe.
----------------------------
Lib/xml/dom/__init__.py, 1.4->1.6

Typo caught by /F -- thanks!
DOMException.__init__():  Remember to pass self to Exception.__init__().
----------------------------
Lib/urllib.py, 1.108->1.109
(partial)
Get rid of string functions, except maketrans() (which is *not*
obsolete!).

Fix a bug in ftpwrapper.retrfile() where somehow ftplib.error_perm was
assumed to be a string.  (The fix applies str().)

Also break some long lines and change the output from test() slightly.
----------------------------
Modules/bsddbmodule.c, 1.25->1.26
[Patch #102827] Fix for PR#119558, avoiding core dumps by checking for
malloc() returning NULL
----------------------------
Lib/site.py, 1.21->1.22
The ".pth" code knew about the layout of Python trees on unix and
windows, but not on the mac. Fixed.
----------------------------
Modules/selectmodule.c, 1.83->1.84
SF bug 110843:  Low FD_SETSIZE limit on Win32 (PR#41).  Boosted to 512.
----------------------------
Modules/parsermodule.c, 2.58->2.59

validate_varargslist():  Fix two bugs in this function, one that affected
                         it when *args and/or **kw are used, and one when
                         they are not.

This closes bug #125375: "parser.tuple2ast() failure on valid parse tree".
----------------------------
Lib/httplib.py, 1.24->1.25
Hopeful fix for SF bug #123924: Windows - using OpenSSL, problem with
socket in httplib.py.

The bug reports that on Windows, you must pass sock._sock to the
socket.ssl() call.  But on Unix, you must pass sock itself.  (sock is
a wrapper on Windows but not on Unix; the ssl() call wants the real
socket object, not the wrapper.)

So we see if sock has an _sock attribute and if so, extract it.
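[Ed. -- that attribute check is a one-liner; a sketch with a hypothetical
helper name:]

```python
def raw_socket(sock):
    # unwrap the Windows-style socket wrapper if present,
    # otherwise use the object itself
    return getattr(sock, "_sock", sock)
```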

Unfortunately, the submitter of the bug didn't confirm that this patch
works, so I'll just have to believe it (can't test it myself since I
don't have OpenSSL on Windows set up, and that's a nontrivial thing I
believe).
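The unwrap described above amounts to a couple of lines. The helper name below is hypothetical, and this is only a sketch of the idea, not the actual httplib code:

```python
def extract_raw_socket(sock):
    # Some platforms hand httplib a wrapper object whose real socket
    # lives in a private _sock attribute; ssl() wants the real socket.
    # If the attribute is absent, the object is already the real thing.
    return getattr(sock, "_sock", sock)
```

Using getattr() with a default handles both the wrapped (Windows) and unwrapped (Unix) cases in a single expression.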
----------------------------
Python/getargs.c, 2.50->2.51
vgetargskeywords(): Patch for memory leak identified in bug #119862.
----------------------------
Lib/ConfigParser.py, 1.23->1.24

remove_option():  Use the right variable name for the option name!

This closes bug #124324.
----------------------------
Lib/filecmp.py, 1.6->1.7
Call of _cmp had the wrong number of parameters.
Fixed the definition of _cmp.
----------------------------
Python/compile.c, 2.143->2.144
Plug a memory leak in com_import_stmt(): the tuple created to hold the
"..." in "from M import ..." was never DECREFed.  Leak reported by
James Slaughter and nailed by Barry, who also provided an earlier
version of this patch.
----------------------------
Objects/stringobject.c, 2.92->2.93
SF patch #102548, fix for bug #121013, by mwh at users.sourceforge.net.

Fixes a typo that caused "".join(u"this is a test") to dump core.
----------------------------
Python/marshal.c, 1.57->1.58
Python/compile.c, 2.142->2.143
SF bug 119622:  compile errors due to redundant atof decls.  I don't understand
the bug report (for details, look at it), but agree there's no need for Python
to declare atof itself:  we #include stdlib.h, and ANSI C sez atof is declared
there already.
----------------------------
Lib/webbrowser.py, 1.4->1.5
Typo for Mac code, fixing SF bug 12195.
----------------------------
Objects/fileobject.c, 2.91->2.92
Added _HAVE_BSDI and __APPLE__ to the list of platforms that require a
hack for TELL64()...  Sounds like there's something else going on
really.  Does anybody have a clue I can buy?
----------------------------
Python/thread_cthread.h, 2.13->2.14
Fix syntax error.  Submitted by Bill Bumgarner.  Apparently this is
still in use, for Apple Mac OSX.
----------------------------
Modules/arraymodule.c, 2.58->2.59
Fix for SF bug 117402, crashes on str(array) and repr(array).  This was an
unfortunate consequence of somebody switching from PyArg_Parse to
PyArg_ParseTuple but without changing the argument from a NULL to a tuple.
----------------------------
Lib/smtplib.py, 1.29->1.30
SMTP.connect(): If the socket.connect() raises a socket.error, be sure
to call self.close() to reclaim some file descriptors, then reraise the
exception.  Closes SF patch #102185 and SF bug #119833.
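The close-then-reraise pattern can be illustrated with a small stand-in class; FragileSMTP here is hypothetical and only mimics the connect()/close() shape of smtplib.SMTP, it is not the library code:

```python
import socket


class FragileSMTP:
    # Dummy stand-in for smtplib.SMTP, illustrating the fix: when
    # connect() fails, close() must run before the error propagates,
    # so the half-open socket's file descriptor is reclaimed.
    def __init__(self):
        self.closed = False

    def _raw_connect(self, host, port):
        # Simulates socket.connect() failing.
        raise socket.error("connection refused")

    def connect(self, host, port):
        try:
            self._raw_connect(host, port)
        except socket.error:
            self.close()   # reclaim the file descriptor
            raise          # then reraise the original exception

    def close(self):
        self.closed = True
```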
----------------------------
Objects/rangeobject.c, 2.20->2.22

Fixed support for containment test when a negative step is used; this
*really* closes bug #121965.

Added three attributes to the xrange object: start, stop, and step.  These
are the same as for the slice objects.

In the containment test, get the boundary condition right.  ">" was used
where ">=" should have been.

This closes bug #121965.
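The corrected containment logic can be sketched in Python; contains() below is a hypothetical re-implementation of the fixed test, not the C code from rangeobject.c:

```python
def contains(start, stop, step, value):
    # Sketch of the corrected containment test for xrange objects.
    # The boundary comparison must be inclusive at `start` (">=", not
    # the buggy strict ">"), and a negative step flips the bounds.
    if step > 0:
        in_bounds = start <= value < stop
    else:
        in_bounds = stop < value <= start
    # The value must also land exactly on a step multiple from start.
    return in_bounds and (value - start) % step == 0
```

For example, contains(10, 0, -2, 10) is True only because the boundary test is inclusive; with the buggy strict comparison it would have been False.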
----------------------------
configure.in, 1.177->1.178
Fix for SF bug #117606:
  - when compiling with GCC on Solaris, use "$(CC) -shared" instead
    of "$(CC) -G" to generate .so files
  - when compiling with GCC on any platform, add "-fPIC" to OPT
    (without this, "$(CC) -shared" dies horribly)
----------------------------
configure.in, 1.175->1.176

Make sure the Modules/ directory is created before writing Modules/Setup.
----------------------------
Modules/_cursesmodule.c, 2.39->2.40
Patch from Randall Hopper to fix PR #116172, "curses module fails to
build on SGI":
* Check for 'sgi' preprocessor symbol, not '__sgi__'
* Surround individual character macros with #ifdef's, instead of making them
  all rely on STRICT_SYSV_CURSES
----------------------------
Modules/_tkinter.c, 1.114->1.115
Do not release unallocated Tcl objects. Closes #117278 and  #117167.
----------------------------
Python/dynload_shlib.c, 2.6->2.7
Patch 102114, Bug 11725.  On OpenBSD (but apparently not on the other
BSDs) you need a leading underscore in the dlsym() lookup name.
----------------------------
Lib/UserString.py, 1.6->1.7
Fix two typos in __imul__.  Closes Bug #117745.
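For illustration, a minimal stand-in (not the real Lib/UserString.py) showing what a working __imul__ looks like:

```python
class SimpleUserString:
    # Minimal sketch of a string wrapper with in-place repeat: __imul__
    # must update the underlying data and return self so that `s *= n`
    # rebinds s to the same (mutated) wrapper.
    def __init__(self, data):
        self.data = data

    def __imul__(self, n):
        self.data *= n
        return self
```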
----------------------------
Lib/mailbox.py, 1.25->1.26

Maildir.__init__():  Make sure self.boxes is set.

This closes SourceForge bug #117490.
----------------------------

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From tim.one at home.com  Wed Mar 28 19:51:27 2001
From: tim.one at home.com (Tim Peters)
Date: Wed, 28 Mar 2001 12:51:27 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com>

Whew!  What a thankless job, Moshe -- thank you!  Comments on a few:

> Objects/complexobject.c, 2.34->2.35
> SF bug [ #409448 ] Complex division is braindead
> http://sourceforge.net/tracker/?func=detail&aid=409448&group_id=547
> 0&atid=105470

As we've seen, that caused a std test to fail on Mac Classic, due to an
accident of fused f.p. code generation and what sure looks like a PowerPC HW
bug.  It can also change numeric results slightly due to different order of
f.p. operations on any platform.  So this would not be a "pure bugfix" in
Aahz's view, despite that it's there purely to fix bugs <wink>.

> Modules/selectmodule.c, 1.83->1.84
> SF bug 110843:  Low FD_SETSIZE limit on Win32 (PR#41).  Boosted to 512.

I'm afraid that boosting implementation limits has to be considered "a
feature".

> Objects/rangeobject.c, 2.20->2.22
>
> Fixed support for containment test when a negative step is used; this
> *really* closes bug #121965.
>
> Added three attributes to the xrange object: start, stop, and step.
> These are the same as for the slice objects.
>
> In the containment test, get the boundary condition right.  ">" was used
> where ">=" should have been.
>
> This closes bug #121965.

This one Aahz singled out previously as a canonical example of a patch he
would *not* include, because adding new attributes seemed potentially
disruptive to him (but why?  maybe someone was depending on the precise value
of len(dir(xrange(42)))?).




From aahz at panix.com  Wed Mar 28 19:57:49 2001
From: aahz at panix.com (aahz at panix.com)
Date: Wed, 28 Mar 2001 09:57:49 -0800 (PST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com> from "Tim Peters" at Mar 28, 2001 12:51:27 PM
Message-ID: <200103281757.MAA04464@panix3.panix.com>

Tim:
> Moshe:
>>
>> Fixed support for containment test when a negative step is used; this
>> *really* closes bug #121965.
>>
>> Added three attributes to the xrange object: start, stop, and step.
>> These are the same as for the slice objects.
>>
>> In the containment test, get the boundary condition right.  ">" was used
>> where ">=" should have been.
>>
>> This closes bug #121965.
> 
> This one Aahz singled out previously as a canonical example of a
> patch he would *not* include, because adding new attributes seemed
> potentially disruptive to him (but why? maybe someone was depending on
> the precise value of len(dir(xrange(42)))?).

I'm not sure about this, but it seems to me that the attribute change
will generate a different .pyc.  If I'm wrong about that, this patch
as-is is fine with me; otherwise, I'd lobby to use the containment fix
but not the attributes (assuming we're willing to use part of a patch).


From mwh21 at cam.ac.uk  Wed Mar 28 20:18:28 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 28 Mar 2001 19:18:28 +0100
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Moshe Zadka's message of "Wed, 28 Mar 2001 19:02:01 +0200"
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <m3itktdaff.fsf@atrus.jesus.cam.ac.uk>

Moshe Zadka <moshez at zadka.site.co.il> writes:

> After labouring over the list of log messages for 2-3 days, I finally
> have a tentative list of changes. I present it as a list of checkin
> messages, complete with the versions. Sometimes I concatenated several
> consecutive checkins into one -- "I fixed the bug", "oops, typo last
> fix" and similar.
> 
> Please go over the list and see if there's anything you feel should
> not go.

I think there are some that don't apply to 2.0.1:

> Python/pythonrun.c, 2.128->2.129
> Fix memory leak with SyntaxError.  (The DECREF was originally hidden
> inside a piece of code that was deemed redundant; the DECREF was
> unfortunately *not* redundant!)

and

> Python/compile.c, 2.150->2.151
> Fix bug reported by Ka-Ping Yee: The compiler botched parsing function
> parameters that contained both anonymous tuples and *arg or **arg. Ex:
> def f(a, (b, c), *d): pass
> 
> Fix the symtable_params() to generate names in the right order for
> co_varnames slot of code object.  Consider *arg and **arg before the
> "complex" names introduced by anonymous tuples.

aren't meaningful without the nested scopes stuff.  But I guess you'll
notice pretty quickly if I'm right...

Otherwise, general encouragement!  Please keep it up.

Cheers,
M.

-- 
  languages shape the way we think, or don't.
                                        -- Erik Naggum, comp.lang.lisp




From jeremy at alum.mit.edu  Wed Mar 28 19:07:10 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 28 Mar 2001 12:07:10 -0500 (EST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <15042.6718.542630.936641@w221.z064000254.bwi-md.dsl.cnc.net>

> Python/ceval.c, 2.224->2.225
> SF bug #130532:  newest CVS won't build on AIX.
> Removed illegal redefinition of REPR macro; kept the one with the
> argument name that isn't too easy to confuse with zero <wink>.

The REPR macro was not present in 2.0 and is no longer present in 2.1.

Jeremy



From guido at digicool.com  Wed Mar 28 20:21:18 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 13:21:18 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 09:57:49 PST."
             <200103281757.MAA04464@panix3.panix.com> 
References: <200103281757.MAA04464@panix3.panix.com> 
Message-ID: <200103281821.NAA10019@cj20424-a.reston1.va.home.com>

> > This one Aahz singled out previously as a canonical example of a
> > patch he would *not* include, because adding new attributes seemed
> > potentially disruptive to him (but why? maybe someone was depending on
> > the precise value of len(dir(xrange(42)))?).
> 
> I'm not sure about this, but it seems to me that the attribute change
> will generate a different .pyc.  If I'm wrong about that, this patch
> as-is is fine with me; otherwise, I'd lobby to use the containment fix
> but not the attributes (assuming we're willing to use part of a patch).

Adding attributes to xrange() can't possibly change the .pyc files.

> >From my POV, it's *real* important that .pyc files be portable between
> bugfix releases, and so far I haven't seen any argument against that
> goal.

Agreed with the goal, of course.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jeremy at alum.mit.edu  Wed Mar 28 19:07:03 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 28 Mar 2001 12:07:03 -0500 (EST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <15042.6711.20698.535298@w221.z064000254.bwi-md.dsl.cnc.net>

> Python/compile.c, 2.150->2.151
> Fix bug reported by Ka-Ping Yee: The compiler botched parsing function
> parameters that contained both anonymous tuples and *arg or **arg. Ex:
> def f(a, (b, c), *d): pass
>
> Fix the symtable_params() to generate names in the right order for
> co_varnames slot of code object.  Consider *arg and **arg before the
> "complex" names introduced by anonymous tuples.

I believe this bug report was only relevant for the compiler w/
symbol table pass introduced in Python 2.1.

Jeremy



From jeremy at alum.mit.edu  Wed Mar 28 19:07:22 2001
From: jeremy at alum.mit.edu (Jeremy Hylton)
Date: Wed, 28 Mar 2001 12:07:22 -0500 (EST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <E14iJKb-0000Kf-00@darjeeling>
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <15042.6730.575137.298460@w221.z064000254.bwi-md.dsl.cnc.net>

> Python/ceval.c, 2.215->2.216
> Add missing Py_DECREF in fast_cfunction.  Partial fix for SF bug
> #127699.

fast_cfunction was not present in Python 2.0.  The CALL_FUNCTION
implementation in ceval.c was rewritten for Python 2.1.

Jeremy




From moshez at zadka.site.co.il  Wed Mar 28 20:22:27 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 20:22:27 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com>
References: <LNBBLJKPBEHFEDALKOLCOENKJIAA.tim.one@home.com>
Message-ID: <E14iKaR-0000d5-00@darjeeling>

On Wed, 28 Mar 2001 12:51:27 -0500, "Tim Peters" <tim.one at home.com> wrote:

> Whew!  What a thankless job, Moshe -- thank you!

I just wanted to keep this in to illustrate the ironic nature of the
universe ;-)

>  Comments on a few:
> 
> > Objects/complexobject.c, 2.34->2.35
> > SF bug [ #409448 ] Complex division is braindead
> > http://sourceforge.net/tracker/?func=detail&aid=409448&group_id=547
> > 0&atid=105470
> 
> As we've seen, that caused a std test to fail on Mac Classic

OK, it's dead.

> > Modules/selectmodule.c, 1.83->1.84
> > SF bug 110843:  Low FD_SETSIZE limit on Win32 (PR#41).  Boosted to 512.
> 
> I'm afraid that boosting implementation limits has to be considered "a
> feature".

You're right. Killed.

> > Objects/rangeobject.c, 2.20->2.22
> >
> > Fixed support for containment test when a negative step is used; this
> > *really* closes bug #121965.
> >
> > Added three attributes to the xrange object: start, stop, and step.
> > These are the same as for the slice objects.
> >
> > In the containment test, get the boundary condition right.  ">" was used
> > where ">=" should have been.
> >
> > This closes bug #121965.
> 
> This one Aahz singled out previously as a canonical example of a patch he
> would *not* include, because adding new attributes seemed potentially
> disruptive to him (but why?  maybe someone was depending on the precise value
> of len(dir(xrange(42)))?).

You're right, I forgot to (partial) this.
(partial)'s mean, BTW, that only part of the patch goes.
I do want to fix the containment, and it's in the same version upgrade.
More work for me! Yay!

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Wed Mar 28 20:25:21 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 20:25:21 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <15042.6730.575137.298460@w221.z064000254.bwi-md.dsl.cnc.net>
References: <15042.6730.575137.298460@w221.z064000254.bwi-md.dsl.cnc.net>, <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <E14iKdF-0000eg-00@darjeeling>

On Wed, 28 Mar 2001, Jeremy Hylton <jeremy at alum.mit.edu> wrote:

> > Python/ceval.c, 2.215->2.216
> > Add missing Py_DECREF in fast_cfunction.  Partial fix for SF bug
> > #127699.
> 
> fast_cfunction was not present in Python 2.0.  The CALL_FUNCTION
> implementation in ceval.c was rewritten for Python 2.1.

Thanks, dropped. Ditto for the REPR and the *arg parsing.

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Wed Mar 28 20:30:31 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 20:30:31 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <200103281757.MAA04464@panix3.panix.com>
References: <200103281757.MAA04464@panix3.panix.com>
Message-ID: <E14iKiF-0000fW-00@darjeeling>

On Wed, 28 Mar 2001 09:57:49 -0800 (PST), <aahz at panix.com> wrote:
 
> From my POV, it's *real* important that .pyc files be portable between
> bugfix releases, and so far I haven't seen any argument against that
> goal.

It is a release-critical goal, yes.
It's not an argument against adding attributes to range objects.
However, adding attributes to range objects is a no-go, and it got in by
mistake.

The list should be, of course, treated as a first rough draft. I'll post a 
more complete list to p-d and p-l after it's hammered out a bit. Since
everyone who checked stuff in is on this mailing list, I wanted people
to review their own checkins first, to see that I'm not making complete blunders.

Thanks a lot to Tim, Jeremy and /F for their feedback, by the way.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From aahz at panix.com  Wed Mar 28 21:06:15 2001
From: aahz at panix.com (aahz at panix.com)
Date: Wed, 28 Mar 2001 11:06:15 -0800 (PST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <no.id> from "Guido van Rossum" at Mar 28, 2001 01:21:18 PM
Message-ID: <200103281906.OAA10976@panix6.panix.com>

Guido:
>Aahz:
>>
>> I'm not sure about this, but it seems to me that the attribute change
>> will generate a different .pyc.  If I'm wrong about that, this patch
>> as-is is fine with me; otherwise, I'd lobby to use the containment fix
>> but not the attributes (assuming we're willing to use part of a patch).
> 
> Adding attributes to xrange() can't possibly change the .pyc files.

Okay, chalk another one up to ignorance.  Another thought occurred to me
in the shower, though: would this change the pickle of xrange()?  If yes,
should pickle changes also be prohibited in bugfix releases (in the PEP)?
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"Boost the stock market -- fire someone"



From guido at digicool.com  Wed Mar 28 21:12:59 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 14:12:59 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 19:02:01 +0200."
             <E14iJKb-0000Kf-00@darjeeling> 
References: <E14iJKb-0000Kf-00@darjeeling> 
Message-ID: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>

> After labouring over the list of log messages for 2-3 days, I finally
> have a tentative list of changes. I present it as a list of checkin
> messages, complete with the versions. Sometimes I concatenated several
> consecutive checkins into one -- "I fixed the bug", "oops, typo last
> fix" and similar.

Good job, Moshe!  The few where I had doubts have already been covered
by others.  As the saying goes, "check it in" :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fredrik at effbot.org  Wed Mar 28 21:21:46 2001
From: fredrik at effbot.org (Fredrik Lundh)
Date: Wed, 28 Mar 2001 21:21:46 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
References: <200103281906.OAA10976@panix6.panix.com>
Message-ID: <018601c0b7bc$55d08f00$e46940d5@hagrid>

> Okay, chalk another one up to ignorance.  Another thought occurred to me
> in the shower, though: would this change the pickle of xrange()?  If yes,
> should pickle changes also be prohibited in bugfix releases (in the PEP)?

from the why-dont-you-just-try-it department:

Python 2.0 (#8, Jan 29 2001, 22:28:01) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import pickle
>>> data = xrange(10)
>>> dir(data)
['tolist']
>>> pickle.dumps(data)
Traceback (most recent call last):
...
pickle.PicklingError: can't pickle 'xrange' object: xrange(10)

Python 2.1b2 (#12, Mar 22 2001, 15:15:01) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
>>> import pickle
>>> data = xrange(10)
>>> dir(data)
['start', 'step', 'stop', 'tolist']
>>> pickle.dumps(data)
Traceback (most recent call last):
...
pickle.PicklingError: can't pickle 'xrange' object: xrange(10)

Cheers /F




From aahz at panix.com  Wed Mar 28 21:17:59 2001
From: aahz at panix.com (aahz at panix.com)
Date: Wed, 28 Mar 2001 11:17:59 -0800 (PST)
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <no.id> from "Fredrik Lundh" at Mar 28, 2001 09:21:46 PM
Message-ID: <200103281917.OAA12358@panix6.panix.com>

> > Okay, chalk another one up to ignorance.  Another thought occurred to me
> > in the shower, though: would this change the pickle of xrange()?  If yes,
> > should pickle changes also be prohibited in bugfix releases (in the PEP)?
> 
> from the why-dont-you-just-try-it department:

You're right, I should have tried it.  I didn't because my shell account
still hasn't set up Python 2.0 as the default version and I haven't yet
set myself up to test beta/patch/CVS releases.  <sigh>  The more I
learn, the more ignorant I feel....
-- 
                      --- Aahz  <*>  (Copyright 2001 by aahz at pobox.com)

Androgynous poly kinky vanilla queer het Pythonista   http://www.rahul.net/aahz/
Hugs and backrubs -- I break Rule 6

"Boost the stock market -- fire someone"



From guido at digicool.com  Wed Mar 28 21:18:26 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 14:18:26 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 11:06:15 PST."
             <200103281906.OAA10976@panix6.panix.com> 
References: <200103281906.OAA10976@panix6.panix.com> 
Message-ID: <200103281918.OAA10296@cj20424-a.reston1.va.home.com>

> > Adding attributes to xrange() can't possibly change the .pyc files.
> 
> Okay, chalk another one up to ignorance.  Another thought occurred to me
> in the shower, though: would this change the pickle of xrange()?  If yes,
> should pickle changes also be prohibited in bugfix releases (in the PEP)?

I agree that pickle changes should be prohibited, although I want to
make an exception for the fix to pickling of Unicode objects (which is
pretty broken in 2.0).

That said, xrange() objects can't be pickled, so it's a non-issue. :-)

--Guido van Rossum (home page: http://www.python.org/~guido/)



From jack at oratrix.nl  Wed Mar 28 21:59:26 2001
From: jack at oratrix.nl (Jack Jansen)
Date: Wed, 28 Mar 2001 21:59:26 +0200 (MET DST)
Subject: [Python-Dev] MacPython 2.1b2 available
Message-ID: <20010328195926.47261EA11F@oratrix.oratrix.nl>

MacPython 2.1b2 is available for download. Get it via
http://www.cwi.nl/~jack/macpython.html .

New in this version:
- A choice of Carbon or Classic runtime, so runs on anything between
  MacOS 8.1 and MacOS X
- Distutils support for easy installation of extension packages
- BBedit language plugin
- All the platform-independent Python 2.1 mods
- New version of Numeric
- Lots of bug fixes
- Choice of normal and active installer

Please send feedback on this release to pythonmac-sig at python.org,
where all the maccies hang out.

Enjoy,


--
Jack Jansen             | ++++ stop the execution of Mumia Abu-Jamal ++++
Jack.Jansen at oratrix.com | ++++ if you agree copy these lines to your sig ++++
www.oratrix.nl/~jack    | ++++ see http://www.xs4all.nl/~tank/ ++++



From moshez at zadka.site.co.il  Wed Mar 28 21:58:23 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 21:58:23 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <m3itktdaff.fsf@atrus.jesus.cam.ac.uk>
References: <m3itktdaff.fsf@atrus.jesus.cam.ac.uk>, <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <E14iM5H-0000rB-00@darjeeling>

On 28 Mar 2001 19:18:28 +0100, Michael Hudson <mwh21 at cam.ac.uk> wrote:
 
> I think there are some that don't apply to 2.0.1:
> 
> > Python/pythonrun.c, 2.128->2.129
> > Fix memory leak with SyntaxError.  (The DECREF was originally hidden
> > inside a piece of code that was deemed redundant; the DECREF was
> > unfortunately *not* redundant!)

OK, dead.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From moshez at zadka.site.co.il  Wed Mar 28 22:05:38 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 22:05:38 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>
References: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>, <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <E14iMCI-0000s2-00@darjeeling>

On Wed, 28 Mar 2001 14:12:59 -0500, Guido van Rossum <guido at digicool.com> wrote:
 
> The few where I had doubts have already been covered
> by others.  As the saying goes, "check it in" :-)

I'm afraid it will still take time to generate the patches, apply
them, test them, etc....
I was hoping to create a list of patches tonight, but I'm a bit too
dead. I'll post to p-l tomorrow with the new list of patches.

PS.
Tools/script/logmerge.py loses version numbers. That pretty much
sucks for doing the work I did, even though the raw log was worse --
I ended up cross referencing and finding version numbers by hand.
If anyone doesn't have anything better to do, here's a nice gift
for 2.1 ;-)

PPS.
Most of the work I can do myself just fine. There are a couple of places
where I could *really* need some help. One of those is testing fixes
for bugs which manifest on exotic OSes (and as far as I'm concerned, 
Windows is as exotic as they come <95 wink>.) Please let me know if
you're interested in testing patches for them.
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Wed Mar 28 22:19:19 2001
From: guido at digicool.com (Guido van Rossum)
Date: Wed, 28 Mar 2001 15:19:19 -0500
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
In-Reply-To: Your message of "Wed, 28 Mar 2001 22:05:38 +0200."
             <E14iMCI-0000s2-00@darjeeling> 
References: <200103281912.OAA10183@cj20424-a.reston1.va.home.com>, <E14iJKb-0000Kf-00@darjeeling>  
            <E14iMCI-0000s2-00@darjeeling> 
Message-ID: <200103282019.PAA10717@cj20424-a.reston1.va.home.com>

> > The few where I had doubts have already been covered
> > by others.  As the saying goes, "check it in" :-)
> 
> I'm afraid it will still take time to generate the patches, apply
> them, test them, etc....

Understood!  There's no immediate hurry (except for the fear that you
might be distracted by real work :-).

> I was hoping to create a list of patches tonight, but I'm a bit too
> dead. I'll post to p-l tommorow with the new list of patches.

You're doing great.  Take some rest.

> PS.
> Tools/script/logmerge.py loses version numbers. That pretty much
> sucks for doing the work I did, even though the raw log was worse --
> I ended up cross referencing and finding version numbers by hand.
> If anyone doesn't have anything better to do, here's a nice gift
> for 2.1 ;-)

Yes, it sucks.  Feel free to check in a change into the 2.1 tree!

> PPS.
> Most of the work I can do myself just fine. There are a couple of places
> where I could *really* need some help. One of those is testing fixes
> for bugs which manifest on exotic OSes (and as far as I'm concerned, 
> Windows is as exotic as they come <95 wink>.) Please let me know if
> you're interested in testing patches for them.

PL will volunteer Win98se and Win2000 testing.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Wed Mar 28 22:25:19 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 28 Mar 2001 22:25:19 +0200
Subject: [Python-Dev] List of Patches to Go in 2.0.1
Message-ID: <200103282025.f2SKPJj04355@mira.informatik.hu-berlin.de>

> This one Aahz singled out previously as a canonical example of a patch he
> would *not* include, because adding new attributes seemed potentially
> disruptive to him (but why?  maybe someone was depending on the precise value
> of len(dir(xrange(42)))?).

There is a patch on SF which backports that change without introducing
these attributes in the 2.0.1 class.

Regards,
Martin




From martin at loewis.home.cs.tu-berlin.de  Wed Mar 28 22:39:20 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Wed, 28 Mar 2001 22:39:20 +0200
Subject: [Python-Dev] List of Patches to Go in 2.0.1
Message-ID: <200103282039.f2SKdKB04694@mira.informatik.hu-berlin.de>

> Modules/_tkinter.c, 1.114->1.115
> Do not release unallocated Tcl objects. Closes #117278 and  #117167.

That is already committed to the maintenance branch.

> Modules/pyexpat.c, 2.42->2.43

There are a number of memory leaks which I think should get fixed,
inside the changes:

2.33->2.34
2.31->2.32 (garbage collection, and missing free calls)

I can produce a patch that only has those changes.

Martin



From michel at digicool.com  Wed Mar 28 23:00:57 2001
From: michel at digicool.com (Michel Pelletier)
Date: Wed, 28 Mar 2001 13:00:57 -0800 (PST)
Subject: [Python-Dev] Updated, shorter PEP 245
Message-ID: <Pine.LNX.4.32.0103281241530.3864-100000@localhost.localdomain>

Hi folks,

I have broken PEP 245 into two different PEPs, the first, which is now PEP
245, covers only the syntax and the changes to the Python language.  It is
much shorter and sweeter than the old one.

The second one, yet to have a number or to be totally polished off,
describes my proposed interface *model* based on the Zope interfaces work
and the previous incarnation of PEP 245.  This next PEP is totally
independent of PEP 245, and can be accepted or rejected independent of the
syntax if a different model is desired.

In fact, Amos Latteier has proposed to me a different, simpler, though
less functional model that would make an excellent alternative.  I'll
encourage him to formalize it.  Or would it be acceptable to offer two
possible models in the same PEP?

Finally, I foresee a third PEP to cover issues beyond the model, like type
checking, interface enforcement, and formalizing well-known Python
"protocols" as interfaces.  That's work for later consideration, which is
also independent of the previous two PEPs.

The *new* PEP 245 can be found at the following link:

http://www.zope.org/Members/michel/MyWiki/InterfacesPEP/PEP245.txt

Enjoy, and please feel free to comment.

-Michel





From michel at digicool.com  Wed Mar 28 23:12:09 2001
From: michel at digicool.com (Michel Pelletier)
Date: Wed, 28 Mar 2001 13:12:09 -0800 (PST)
Subject: [Python-Dev] Updated, shorter PEP 245
In-Reply-To: <Pine.LNX.4.32.0103281241530.3864-100000@localhost.localdomain>
Message-ID: <Pine.LNX.4.32.0103281311420.3864-100000@localhost.localdomain>


On Wed, 28 Mar 2001, Michel Pelletier wrote:

> The *new* PEP 245 can be found at the following link:
>
> http://www.zope.org/Members/michel/MyWiki/InterfacesPEP/PEP245.txt

It's also available in a formatted version at the python dev site:

http://python.sourceforge.net/peps/pep-0245.html

-Michel




From moshez at zadka.site.co.il  Wed Mar 28 23:10:14 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Wed, 28 Mar 2001 23:10:14 +0200
Subject: [Python-Dev] Re: List of Patches to Go in 2.0.1
In-Reply-To: <200103282039.f2SKdKB04694@mira.informatik.hu-berlin.de>
References: <200103282039.f2SKdKB04694@mira.informatik.hu-berlin.de>
Message-ID: <E14iNCo-00014t-00@darjeeling>

On Wed, 28 Mar 2001, "Martin v. Loewis" <martin at loewis.home.cs.tu-berlin.de> wrote:

> > Modules/_tkinter.c, 1.114->1.115
> > Do not release unallocated Tcl objects. Closes #117278 and  #117167.
> 
> That is already committed to the maintenance branch.

Thanks, deleted.

> > Modules/pyexpat.c, 2.42->2.43
> 
> There are a number of memory leaks which I think should get fixed,
> inside the changes:
> 
> 2.33->2.34
> 2.31->2.32 (garbage collection, and missing free calls)
> 
> I can produce a patch that only has those changes.

Yes, that would be very helpful. 
Please assign it to me if you post it at SF.
The problem I had with the XML code (which had a couple of other fixed
bugs) was that the fix was always "resynced with the PyXML tree", which
seemed to me too large a change to be safe...
-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From barry at digicool.com  Wed Mar 28 23:14:42 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Wed, 28 Mar 2001 16:14:42 -0500
Subject: [Python-Dev] Updated, shorter PEP 245
References: <Pine.LNX.4.32.0103281241530.3864-100000@localhost.localdomain>
Message-ID: <15042.21570.617105.910629@anthem.wooz.org>

>>>>> "MP" == Michel Pelletier <michel at digicool.com> writes:

    MP> In fact, Amos Latteier has proposed to me a different,
    MP> simpler, though less functional model that would make an
    MP> excellent alternative.  I'll encourage him to formalize it.
    MP> Or would it be acceptable to offer two possible models in the
    MP> same PEP?

It would probably be better to have them as two separate (competing)
PEPs.

-Barry



From mwh21 at cam.ac.uk  Thu Mar 29 00:55:36 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: 28 Mar 2001 23:55:36 +0100
Subject: [Python-Dev] test_doctest failing, but perhaps by accident
In-Reply-To: "Tim Peters"'s message of "Wed, 21 Mar 2001 17:30:52 -0500"
References: <LNBBLJKPBEHFEDALKOLCEEFJJHAA.tim.one@home.com>
Message-ID: <m3g0fxcxlj.fsf@atrus.jesus.cam.ac.uk>

"Tim Peters" <tim.one at home.com> writes:

> I'm calling this one a bug in doctest.py, and will fix it there.  Ugly:
> since we can no longer rely on list.sort() not raising exceptions, it won't be
> enough to replace the existing
> 
>     for k, v in dict.items():
> 
> with
> 
>     items = dict.items()
>     items.sort()
>     for k, v in items:

Hmm, reading through these posts for summary purposes, it occurs to me
that this *is* safe, 'cause item 0 of the tuples will always be
distinct strings, and as equal-length tuples are compared
lexicographically, the values will never actually be compared!
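
A tiny sketch of the argument (hypothetical class name, and the modern
sorted() spelling):

```python
class NoCompare:
    """Hypothetical stand-in for a value that blows up when ordered."""
    def __lt__(self, other):
        raise TypeError("values should never be compared")
    __gt__ = __le__ = __ge__ = __lt__

d = {"b": NoCompare(), "a": NoCompare(), "c": NoCompare()}
items = sorted(d.items())   # equal-length tuples compare lexicographically
# Item 0 of each tuple is a distinct string, so every comparison is
# decided there and the NoCompare values are never touched.
print([k for k, v in items])   # -> ['a', 'b', 'c']
```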

pointless-ly y'rs
M.

-- 
93. When someone says "I want a programming language in which I
    need only say what I wish done," give him a lollipop.
  -- Alan Perlis, http://www.cs.yale.edu/homes/perlis-alan/quotes.html




From mwh21 at cam.ac.uk  Thu Mar 29 14:06:00 2001
From: mwh21 at cam.ac.uk (Michael Hudson)
Date: Thu, 29 Mar 2001 13:06:00 +0100 (BST)
Subject: [Python-Dev] python-dev summary, 2001-03-15 - 2001-03-29
Message-ID: <Pine.LNX.4.10.10103291304110.866-100000@localhost.localdomain>

 This is a summary of traffic on the python-dev mailing list between
 Mar 15 and Mar 28 (inclusive) 2001.  It is intended to inform the
 wider Python community of ongoing developments.  To comment, just
 post to python-list at python.org or comp.lang.python in the usual
 way. Give your posting a meaningful subject line, and if it's about a
 PEP, include the PEP number (e.g. Subject: PEP 201 - Lockstep
 iteration).  All python-dev members are interested in seeing ideas
 discussed by the community, so don't hesitate to take a stance on a
 PEP if you have an opinion.

 This is the fourth summary written by Michael Hudson.
 Summaries are archived at:

  <http://starship.python.net/crew/mwh/summaries/>

   Posting distribution (with apologies to mbm)

   Number of articles in summary: 410

    50 |                 [|]                                    
       |                 [|]                                    
       |                 [|]                                    
       |                 [|]                                    
    40 |                 [|]                                    
       |                 [|] [|]                                
       | [|]             [|] [|]                                
       | [|]             [|] [|] [|]     [|]                    
    30 | [|]             [|] [|] [|]     [|]                    
       | [|]             [|] [|] [|]     [|]                    
       | [|]             [|] [|] [|]     [|] [|]                
       | [|]         [|] [|] [|] [|]     [|] [|]             [|]
    20 | [|] [|]     [|] [|] [|] [|]     [|] [|]             [|]
       | [|] [|]     [|] [|] [|] [|] [|] [|] [|]             [|]
       | [|] [|]     [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
       | [|] [|]     [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
    10 | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
       | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]     [|]     [|]
       | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]
       | [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|] [|]
     0 +-044-024-013-029-059-046-040-022-040-031-007-019-008-028
        Thu 15| Sat 17| Mon 19| Wed 21| Fri 23| Sun 25| Tue 27|
            Fri 16  Sun 18  Tue 20  Thu 22  Sat 24  Mon 26  Wed 28

 Bug-fixing for 2.1 remained a priority for python-dev this fortnight,
 which saw the release of 2.1b2 last Friday.


    * Python 2.0.1 *

 Aahz posted his first draft of PEP 6, outlining the process by which
 maintenance releases of Python should be made.

  <http://python.sourceforge.net/peps/pep-0006.html>

 Moshe Zadka has volunteered to be the "Patch Czar" for Python 2.0.1.

  <http://mail.python.org/pipermail/python-dev/2001-March/013952.html>

 I'm sure we can all join in the thanks due to Moshe for taking up
 this tedious but valuable job!


    * Simple Generator implementations *

 Neil Schemenauer posted links to a couple of "simple" implementations
 of generators (a.k.a. resumable functions) that do not depend on the
 stackless changes going in.

  <http://mail.python.org/pipermail/python-dev/2001-March/013648.html>
  <http://mail.python.org/pipermail/python-dev/2001-March/013666.html>

 These implementations have the advantage that they might be
 applicable to Jython, something that sadly cannot be said of
 stackless.
 

    * portable file-system stuff *

 The longest thread of the summary period started off with a request
 for a portable way to find out free disk space:

  <http://mail.python.org/pipermail/python-dev/2001-March/013706.html>

 After a slightly acrimonious debate about the nature of Python
 development, /F produced a patch that implements partial support for
 os.statvfs on Windows:

  <http://sourceforge.net/tracker/index.php?func=detail&aid=410547&group_id=5470&atid=305470>

 which can be used to extract such information.
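
 A minimal POSIX sketch of reading the free space this way (illustrative
 only; on Windows it depends on the partial support from /F's patch):

```python
import os

# POSIX interface; blocks available to non-root * fragment size.
st = os.statvfs("/")
free_bytes = st.f_bavail * st.f_frsize
print("free space on /:", free_bytes, "bytes")
```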

 A side-product of this discussion was the observation that although
 Python has a module that does some file manipulation, shutil, it is
 far from being as portable as it might be - in particular it fails
 miserably on the Mac where it ignores resource forks.  Greg Ward then
 pointed out that he had to implement cross-platform file copying for
 the distutils

  <http://mail.python.org/pipermail/python-dev/2001-March/013962.html>

 so perhaps all that needs to be done is for this stuff to be moved
 into the core.  It seems very unlikely there will be much movement
 here before 2.2.




From fdrake at cj42289-a.reston1.va.home.com  Thu Mar 29 15:01:26 2001
From: fdrake at cj42289-a.reston1.va.home.com (Fred Drake)
Date: Thu, 29 Mar 2001 08:01:26 -0500 (EST)
Subject: [Python-Dev] [development doc updates]
Message-ID: <20010329130126.C3EED2888E@cj42289-a.reston1.va.home.com>

The development version of the documentation has been updated:

	http://python.sourceforge.net/devel-docs/


For Peter Funk:  Removed space between function/method/class names and
their parameter lists for easier cut & paste.  This is a *tentative*
change; feedback is appreciated at python-docs at python.org.

Also added some new information on integrating with the cycle detector
and some additional C APIs introduced in Python 2.1 (PyObject_IsInstance(),
PyObject_IsSubclass()).




From dalke at acm.org  Fri Mar 30 01:07:17 2001
From: dalke at acm.org (Andrew Dalke)
Date: Thu, 29 Mar 2001 16:07:17 -0700
Subject: [Python-Dev] 'mapping' in weakrefs unneeded?
Message-ID: <015101c0b8a5$00c37ce0$d795fc9e@josiah>

Hello all,

  I'm starting to learn how to use weakrefs.  I'm curious
about the function named 'mapping'.  It is implemented as:

> def mapping(dict=None,weakkeys=0):
>     if weakkeys:
>         return WeakKeyDictionary(dict)
>     else:
>         return WeakValueDictionary(dict)

Why is this a useful function?  Shouldn't people just call
WeakKeyDictionary and WeakValueDictionary directly instead
of calling mapping with a parameter to specify which class
to construct?

If anything, this function is very confusing.  Take the
associated documentation as a case in point:

> mapping([dict[, weakkeys=0]]) 
> Return a weak dictionary. If dict is given and not None,
> the new dictionary will contain the items contained in dict.
> The values from dict must be weakly referencable; if any
> values which would be inserted into the new mapping are not
> weakly referencable, TypeError will be raised and the new
> mapping will be empty.
>
> If the weakkeys argument is not given or zero, the values in
> the dictionary are weak. That means the entries in the
> dictionary will be discarded when no strong reference to the
> value exists anymore. 
>
> If the weakkeys argument is nonzero, the keys in the
> dictionary are weak, i.e. the entry in the dictionary is
> discarded when the last strong reference to the key is
> discarded. 

As far as I can tell, this documentation is wrong, or at
the very least confusing.  For example, it says:
> The values from dict must be weakly referencable

but when the weakkeys argument is nonzero,
> the keys in the dictionary are weak

So must both keys and values be weak?  Or only the keys?
I hope the latter since there are cases I can think of
where I want the keys to be weak and the values to be types,
hence not weakly referenceable.

Wouldn't it be better to remove the 'mapping' function and
only have the WeakKeyDictionary and WeakValueDictionary?
In that case the documentation becomes:

> WeakValueDictionary([dict]) 
> Return a weak dictionary. If dict is given and not None,
> the new dictionary will contain the items contained in dict.
> The values from dict must be weakly referencable; if any
> values which would be inserted into the new mapping are not
> weakly referencable, TypeError will be raised and the new
> mapping will be empty.
>
> The values in
> the dictionary are weak. That means the entries in the
> dictionary will be discarded when no strong reference to the
> value exists anymore. 

> WeakKeyDictionary([dict]) 
> Return a weak dictionary. If dict is given and not None,
> the new dictionary will contain the items contained in dict.
> The keys from dict must be weakly referencable; if any
> keys which would be inserted into the new mapping are not
> weakly referencable, TypeError will be raised and the new
> mapping will be empty.
>
> The keys in
> the dictionary are weak. That means the entries in the
> dictionary will be discarded when no strong reference to the
> key exists anymore. 

Easier to read and to see the parallels between the two
styles, IMHO of course.
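
For the record, a quick sketch against the weakref module suggests the
latter reading: with weak keys the values need not be weakly referenceable
at all (this assumes CPython's immediate refcount-based collection):

```python
import weakref

class Obj:
    """Plain class instances are weakly referenceable (ints/strings are not)."""

# WeakValueDictionary: the *values* are weak.
o = Obj()
wvd = weakref.WeakValueDictionary()
wvd["key"] = o
print(len(wvd))        # 1: a strong reference to o still exists
del o                  # drop the last strong reference
print(len(wvd))        # 0 in CPython: refcounting collects o at once

# WeakKeyDictionary: the *keys* are weak; values may be anything,
# including types, which need not be weakly referenceable.
k = Obj()
wkd = weakref.WeakKeyDictionary()
wkd[k] = int
print(wkd[k] is int)   # True
del k
print(len(wkd))        # 0 once the key is gone
```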

I am not on this list though I will try to read the
archives online for the next couple of days.  Please
CC me about any resolution to this topic.

Sincerely,

                    Andrew
                    dalke at acm.org





From martin at loewis.home.cs.tu-berlin.de  Fri Mar 30 09:55:59 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 30 Mar 2001 09:55:59 +0200
Subject: [Python-Dev] Assigning to __debug__
Message-ID: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de>

After the recent change that assignments to __debug__ are disallowed,
I noticed that IDLE stops working (see SF bug report), since it was
assigning to __debug__. 

Simply commenting-out the assignment (to zero) did no good: Inside the
__debug__ blocks, IDLE would try to perform print statements, which
would write to the re-assigned sys.stdout, which would invoke the code
that had the __debug__, which would give up thanks to infinite
recursion. So essentially, you either have to remove the __debug__
blocks, or rewrite them to write to save_stdout - in which case all
the ColorDelegator debug messages appear in the terminal window.

So anybody porting to Python 2.1 will essentially have to remove all
__debug__ blocks that were previously disabled by assigning 0 to
__debug__. I think this is undesirable.

As I recall, in the original description of __debug__, being able to
assign to it was reported as one of its main features, so that you
still had a run-time option (unless the interpreter was running with
-O, which eliminates the __debug__ blocks).

So in short, I think this change should be reverted.

Regards,
Martin

P.S. What was the motivation for that change, anyway?



From mal at lemburg.com  Fri Mar 30 10:06:42 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 10:06:42 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de>
Message-ID: <3AC43E92.C269D98D@lemburg.com>

"Martin v. Loewis" wrote:
> 
> After the recent change that assignments to __debug__ are disallowed,
> I noticed that IDLE stops working (see SF bug report), since it was
> assigning to __debug__.
> 
> Simply commenting-out the assignment (to zero) did no good: Inside the
> __debug__ blocks, IDLE would try to perform print statements, which
> would write to the re-assigned sys.stdout, which would invoke the code
> that had the __debug__, which would give up thanks to infinite
> recursion. So essentially, you either have to remove the __debug__
> blocks, or rewrite them to write to save_stdout - in which case all
> the ColorDelegator debug messages appear in the terminal window.
> 
> So anybody porting to Python 2.1 will essentially have to remove all
> __debug__ blocks that were previously disabled by assigning 0 to
> __debug__. I think this is undesirable.
> 
> As I recall, in the original description of __debug__, being able to
> assign to it was reported as one of its main features, so that you
> still had a run-time option (unless the interpreter was running with
> -O, which eliminates the __debug__ blocks).
> 
> So in short, I think this change should be reverted.

+1 from here... 

I use the same concept for debugging: during development I set 
__debug__ to 1, in production I change it to 0 (python -O does this
for me as well).

> Regards,
> Martin
> 
> P.S. What was the motivation for that change, anyway?
> 
> _______________________________________________
> Python-Dev mailing list
> Python-Dev at python.org
> http://mail.python.org/mailman/listinfo/python-dev

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at digicool.com  Fri Mar 30 15:30:18 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 08:30:18 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 09:55:59 +0200."
             <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> 
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> 
Message-ID: <200103301330.IAA23144@cj20424-a.reston1.va.home.com>

> After the recent change that assignments to __debug__ are disallowed,
> I noticed that IDLE stops working (see SF bug report), since it was
> assigning to __debug__. 

I checked in a fix to IDLE too, but it seems you were using an
externally-installed version of IDLE.

> Simply commenting-out the assignment (to zero) did no good: Inside the
> __debug__ blocks, IDLE would try to perform print statements, which
> would write to the re-assigned sys.stdout, which would invoke the code
> that had the __debug__, which would give up thanks to infinite
> recursion. So essentially, you either have to remove the __debug__
> blocks, or rewrite them to writing to save_stdout - in which case all
> the ColorDelegator debug message appear in the terminal window.

IDLE was totally abusing the __debug__ variable -- in the fix, I
simply changed all occurrences of __debug__ to DEBUG.

> So anybody porting to Python 2.1 will essentially have to remove all
> __debug__ blocks that were previously disabled by assigning 0 to
> __debug__. I think this is undesirable.

Assigning to __debug__ was never well-defined.  You used it at your
own risk.

> As I recall, in the original description of __debug__, being able to
> assign to it was reported as one of its main features, so that you
> still had a run-time option (unless the interpreter was running with
> -O, which eliminates the __debug__ blocks).

The manual has always used words that suggest that there is something
special about __debug__.  And there was: the compiler assumed it could
eliminate blocks started with "if __debug__:" when compiling in -O
mode.  Also, assert statements have always used LOAD_GLOBAL to
retrieve the __debug__ variable.

> So in short, I think this change should be reverted.

It's possible that it breaks more code, and it's possible that we end
up having to change the error into a warning for now.  But I insist
that assignment to __debug__ should become illegal.  You can *use* the
variable (to determine whether -O is on or not), but you can't *set*
it.

> Regards,
> Martin
> 
> P.S. What was the motivation for that change, anyway?

To enforce a restriction that was always intended: __debug__ should be
a read-only variable.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mal at lemburg.com  Fri Mar 30 15:42:59 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 15:42:59 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>
Message-ID: <3AC48D63.A8AFA489@lemburg.com>

Guido van Rossum wrote:
> > ...
> > So anybody porting to Python 2.1 will essentially have to remove all
> > __debug__ blocks that were previously disabled by assigning 0 to
> > __debug__. I think this is undesirable.
> 
> Assigning to __debug__ was never well-defined.  You used it at your
> own risk.
> 
> > As I recall, in the original description of __debug__, being able to
> > assign to it was reported as one of its main features, so that you
> > still had a run-time option (unless the interpreter was running with
> > -O, which eliminates the __debug__ blocks).
> 
> The manual has always used words that suggest that there is something
> special about __debug__.  And there was: the compiler assumed it could
> eliminate blocks started with "if __debug__:" when compiling in -O
> mode.  Also, assert statements have always used LOAD_GLOBAL to
> retrieve the __debug__ variable.
> 
> > So in short, I think this change should be reverted.
> 
> It's possible that it breaks more code, and it's possible that we end
> up having to change the error into a warning for now.  But I insist
> that assignment to __debug__ should become illegal.  You can *use* the
> variable (to determine whether -O is on or not), but you can't *set*
> it.
> 
> > Regards,
> > Martin
> >
> > P.S. What was the motivation for that change, anyway?
> 
> To enforce a restriction that was always intended: __debug__ should be
> a read-only variable.

So you are suggesting that we change all our code to something like:

__enable_debug__ = 0 # set to 0 for production mode

...

if __debug__ and __enable_debug__:
   print 'debugging information'

...

I don't see the point in having to introduce a new variable
just to disable debugging code in Python code which does not
run under -O.

What does defining __debug__ as read-only variable buy us 
in the long term ?

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From guido at digicool.com  Fri Mar 30 16:02:35 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 09:02:35 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 15:42:59 +0200."
             <3AC48D63.A8AFA489@lemburg.com> 
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>  
            <3AC48D63.A8AFA489@lemburg.com> 
Message-ID: <200103301402.JAA23365@cj20424-a.reston1.va.home.com>

> So you are suggesting that we change all our code to something like:
> 
> __enable_debug__ = 0 # set to 0 for production mode
> 
> ...
> 
> if __debug__ and __enable_debug__:
>    print 'debugging information'
> 
> ...

I can't suggest anything, because I have no idea what semantics you
are assuming for __debug__ here, and I have no idea what you want with
that code.  Maybe you'll want to say "__debug__ = 1" even when you are
in -O mode -- that will definitely not work!

The form above won't (currently) be optimized out -- only "if
__debug__:" is optimized away, nothing more complicated (not even "if
(__debug__):").

In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
__UNDERSCORE__ CONVENTION!  Those names are reserved for the
interpreter, and you risk that they will be assigned a different
semantics in the future.

> I don't see the point in having to introduce a new variable
> just to disable debugging code in Python code which does not
> run under -O.
> 
> What does defining __debug__ as read-only variable buy us 
> in the long term ?

It allows the compiler to assume that __debug__ is a built-in name.
In the future, the __debug__ variable may become meaningless, as we
develop more differentiated optimization options.

The *only* acceptable use for __debug__ is to get rid of code that is
essentially an assertion but can't be spelled with just an assertion,
e.g.

def f(L):
    if __debug__:
        # Assert L is a list of integers:
        for item in L:
            assert isinstance(item, type(1))
    ...

--Guido van Rossum (home page: http://www.python.org/~guido/)



From fredrik at pythonware.com  Fri Mar 30 16:07:08 2001
From: fredrik at pythonware.com (Fredrik Lundh)
Date: Fri, 30 Mar 2001 16:07:08 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>             <3AC48D63.A8AFA489@lemburg.com>  <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <018001c0b922$b58b5d50$0900a8c0@SPIFF>

guido wrote:
> In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> __UNDERSCORE__ CONVENTION!

is the "__version__" convention documented somewhere?

Cheers /F




From moshez at zadka.site.co.il  Fri Mar 30 16:21:27 2001
From: moshez at zadka.site.co.il (Moshe Zadka)
Date: Fri, 30 Mar 2001 16:21:27 +0200
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <018001c0b922$b58b5d50$0900a8c0@SPIFF>
References: <018001c0b922$b58b5d50$0900a8c0@SPIFF>, <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>             <3AC48D63.A8AFA489@lemburg.com>  <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <E14izmJ-0006yR-00@darjeeling>

On Fri, 30 Mar 2001, "Fredrik Lundh" <fredrik at pythonware.com> wrote:
 
> is the "__version__" convention documented somewhere?

Yes. I don't remember where, but the words are something like "the __ names
are reserved for use by the infrastructure, loosely defined as the interpreter
and the standard library. Code which has aspirations to be part of the
infrastructure must use a unique prefix like __bobo_pos__"

-- 
"I'll be ex-DPL soon anyway so I'm        |LUKE: Is Perl better than Python?
looking for someplace else to grab power."|YODA: No...no... no. Quicker,
   -- Wichert Akkerman (on debian-private)|      easier, more seductive.
For public key, finger moshez at debian.org  |http://www.{python,debian,gnu}.org



From guido at digicool.com  Fri Mar 30 16:40:00 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 09:40:00 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 16:07:08 +0200."
             <018001c0b922$b58b5d50$0900a8c0@SPIFF> 
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com>  
            <018001c0b922$b58b5d50$0900a8c0@SPIFF> 
Message-ID: <200103301440.JAA23550@cj20424-a.reston1.va.home.com>

> guido wrote:
> > In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> > __UNDERSCORE__ CONVENTION!
> 
> is the "__version__" convention documented somewhere?

This is a trick question, right?  :-)

__version__ may not be documented but is in de facto use.  Folks
introducing other names (e.g. __author__, __credits__) should really
consider a PEP before grabbing a piece of the namespace.

--Guido van Rossum (home page: http://www.python.org/~guido/)



From mal at lemburg.com  Fri Mar 30 17:10:17 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 17:10:17 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>  
	            <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <3AC4A1D9.9D4C5BF7@lemburg.com>

Guido van Rossum wrote:
> 
> > So you are suggesting that we change all our code to something like:
> >
> > __enable_debug__ = 0 # set to 0 for production mode
> >
> > ...
> >
> > if __debug__ and __enable_debug__:
> >    print 'debugging information'
> >
> > ...
> 
> I can't suggest anything, because I have no idea what semantics you
> are assuming for __debug__ here, and I have no idea what you want with
> that code.  Maybe you'll want to say "__debug__ = 1" even when you are
> in -O mode -- that will definitely not work!

I know, but that's what I'm expecting. The point was to be able
to disable debugging code when running Python in non-optimized mode.
We'd have to change our code and use a new variable to work
around the SyntaxError exception.

While this is not so much of a problem for new code, existing code
will break (i.e. not byte-compile anymore) in Python 2.1.

A warning would be OK, but adding yet another SyntaxError for previously 
perfectly valid code is not going to make the Python users out there 
very happy... the current situation with two different settings
in common use out there (Python 1.5.2 and 2.0) is already a pain
to maintain due to issues on Windows platforms (DLL
problems).

I don't think that introducing even more subtle problems in 2.1
is going to be well accepted by Joe User.
 
> The form above won't (currently) be optimized out -- only "if
> __debug__:" is optimized away, nothing more complicated (not even "if
> (__debug__):").

Ok, make the code look like this then:

if __debug__:
   if enable_debug:
       print 'debug info'
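
Spelled out as a self-contained sketch (ENABLE_DEBUG is a hypothetical
name, and print is written as a function so the snippet runs as-is):

```python
ENABLE_DEBUG = 0   # hypothetical runtime flag; flip to 1 while developing

def process(data):
    if __debug__:          # this entire block is stripped under -O
        if ENABLE_DEBUG:   # runtime switch; assigning to it stays legal
            print('processing %d items' % len(data))
    return [d * 2 for d in data]

print(process([1, 2, 3]))  # -> [2, 4, 6]
```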
 
> In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> __UNDERSCORE__ CONVENTION!  Those names are reserved for the
> interpreter, and you risk that they will be assigned a different
> semantics in the future.

Hey, this was just an example... ;-)

> > I don't see the point in having to introduce a new variable
> > just to disable debugging code in Python code which does not
> > run under -O.
> >
> > What does defining __debug__ as read-only variable buy us
> > in the long term ?
> 
> It allows the compiler to assume that __debug__ is a built-in name.
> In the future, the __debug__ variable may become meaningless, as we
> develop more differentiated optimization options.
> 
> The *only* acceptable use for __debug__ is to get rid of code that is
> essentially an assertion but can't be spelled with just an assertion,
> e.g.
> 
> def f(L):
>     if __debug__:
>         # Assert L is a list of integers:
>         for item in L:
>             assert isinstance(item, type(1))
>     ...

Maybe just me, but I use __debug__ a lot to do extra logging or 
printing in my code too; not just for assertions.

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From barry at digicool.com  Fri Mar 30 17:38:48 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Fri, 30 Mar 2001 10:38:48 -0500
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de>
	<200103301330.IAA23144@cj20424-a.reston1.va.home.com>
	<3AC48D63.A8AFA489@lemburg.com>
	<200103301402.JAA23365@cj20424-a.reston1.va.home.com>
Message-ID: <15044.43144.133911.800065@anthem.wooz.org>

>>>>> "GvR" == Guido van Rossum <guido at digicool.com> writes:

    GvR> The *only* acceptable use for __debug__ is to get rid of code
    GvR> that is essentially an assertion but can't be spelled with
    GvR> just an assertion, e.g.

Interestingly enough, last night Jim Fulton and I talked about a
situation where you might want asserts to survive running under -O,
because you want to take advantage of other optimizations, but you
still want to assert certain invariants in your code.

Of course, you can do this now by just not using the assert
statement.  So that's what we're doing, and for giggles we're multiply
inheriting the exception we raise from AssertionError and our own
exception.  What I think we'd prefer is a separate switch to control
optimization and the disabling of assert.
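
Sketched out (with hypothetical names, not the actual code), the idea
looks something like:

```python
class InvariantError(AssertionError, ValueError):
    """Raised by checks that must survive running under -O."""

def check(condition, message):
    # A plain function call is never stripped by -O, unlike the
    # assert statement.
    if not condition:
        raise InvariantError(message)

check(1 + 1 == 2, "arithmetic broke")   # passes silently
try:
    check(False, "invariant violated")
except AssertionError as e:
    print("caught:", e)   # -> caught: invariant violated
```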

-Barry



From thomas.heller at ion-tof.com  Fri Mar 30 17:43:00 2001
From: thomas.heller at ion-tof.com (Thomas Heller)
Date: Fri, 30 Mar 2001 17:43:00 +0200
Subject: [Python-Dev] [Very Long 23kb] List of Patches to Go in 2.0.1
References: <E14iJKb-0000Kf-00@darjeeling>
Message-ID: <0a8201c0b930$19fc0750$e000a8c0@thomasnotebook>

IMO the fix to this bug should also go into 2.0.1:

Bug id 231064, sys.path not set correctly in embedded python interpreter

which is fixed in revision 1.23 of PC/getpathp.c


Thomas Heller




From thomas at xs4all.net  Fri Mar 30 17:48:28 2001
From: thomas at xs4all.net (Thomas Wouters)
Date: Fri, 30 Mar 2001 17:48:28 +0200
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <15044.43144.133911.800065@anthem.wooz.org>; from barry@digicool.com on Fri, Mar 30, 2001 at 10:38:48AM -0500
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com> <15044.43144.133911.800065@anthem.wooz.org>
Message-ID: <20010330174828.K13066@xs4all.nl>

On Fri, Mar 30, 2001 at 10:38:48AM -0500, Barry A. Warsaw wrote:

> What I think we'd prefer is a separate switch to control
> optimization and the disabling of assert.

You mean something like

#!/usr/bin/python -fno-asserts -fno_debug_ -fdocstrings -fdeadbranch 

Right!-ly y'rs,
-- 
Thomas Wouters <thomas at xs4all.net>

Hi! I'm a .signature virus! copy me into your .signature file to help me spread!



From Paul.Moore at uk.origin-it.com  Fri Mar 30 17:52:04 2001
From: Paul.Moore at uk.origin-it.com (Moore, Paul)
Date: Fri, 30 Mar 2001 16:52:04 +0100
Subject: [Python-Dev] PEP: Use site-packages on all platforms
Message-ID: <714DFA46B9BBD0119CD000805FC1F53B01B5ADD9@ukrux002.rundc.uk.origin-it.com>

It was suggested that I post this to python-dev, as well as python-list and
the distutils SIG. I apologise if this is being done backwards: should I get
a proper PEP number first, or is it appropriate to ask for initial comments
like this?


Paul

-----Original Message-----
From: Moore, Paul 
Sent: 30 March 2001 13:32
To: distutils-sig at python.org
Cc: 'python-list at python.org'
Subject: [Distutils] PEP: Use site-packages on all platforms


Attached is a first draft of a proposal to use the "site-packages" directory
for locally installed modules, on all platforms instead of just on Unix. If
the consensus is that this is a worthwhile proposal, I'll submit it as a
formal PEP.

Any advice or suggestions welcomed - I've never written a PEP before - I
hope I've got the procedure right...

Paul Moore

PEP: TBA
Title: Install local packages in site-packages on all platforms
Version: $Revision$
Author: Paul Moore <gustav at morpheus.demon.co.uk>
Status: Draft
Type: Standards Track
Python-Version: 2.2
Created: 2001-03-30
Post-History: TBA

Abstract

    The standard Python distribution includes a directory Lib/site-packages,
    which is used on Unix platforms to hold locally-installed modules and
    packages. The site.py module distributed with Python includes support
    for locating modules in this directory.

    This PEP proposes that the site-packages directory should be used
    uniformly across all platforms for locally installed modules.


Motivation

    On Windows platforms, the default setting for sys.path does not include
    a directory suitable for users to install locally-developed modules.
    The "expected" location appears to be the directory containing the
    Python executable itself. Including locally developed code in the same
    directory as installed executables is not good practice.

    Clearly, users can manipulate sys.path, either in a locally modified
    site.py, or in a suitable sitecustomize.py, or even via .pth files.
    However, there should be a standard location for such files, rather than
    relying on every individual site having to set their own policy.

    In addition, with distutils becoming more prevalent as a means of
    distributing modules, the need for a standard install location for
    distributed modules will become more common. It would be better to
    define such a standard now, rather than later, when more distutils-based
    packages exist which will need rebuilding.

    It is relevant to note that prior to Python 2.1, the site-packages
    directory was not included in sys.path on Macintosh platforms. This has
    been changed in 2.1, so the Macintosh now includes site-packages in
    sys.path, leaving Windows as the only major platform with no
    site-specific modules directory.


Implementation

    The implementation of this feature is fairly trivial. All that would be
    required is a change to site.py, to change the section setting sitedirs.
    The Python 2.1 version has

        if os.sep == '/':
            sitedirs = [makepath(prefix,
                                 "lib",
                                 "python" + sys.version[:3],
                                 "site-packages"),
                        makepath(prefix, "lib", "site-python")]
        elif os.sep == ':':
            sitedirs = [makepath(prefix, "lib", "site-packages")]
        else:
            sitedirs = [prefix]

    A suitable change would be to simply replace the last 4 lines with

        else:
            sitedirs = [makepath(prefix, "lib", "site-packages")]
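As a sanity check, here is a self-contained sketch of the whole block after that replacement (makepath below is a simplified stand-in for the helper defined in site.py):

```python
import os
import sys

def makepath(*paths):
    # simplified stand-in for site.py's makepath helper
    return os.path.normcase(os.path.abspath(os.path.join(*paths)))

prefix = sys.prefix

if os.sep == '/':
    # Unix keeps its versioned layout unchanged
    sitedirs = [makepath(prefix, "lib",
                         "python" + sys.version[:3],
                         "site-packages"),
                makepath(prefix, "lib", "site-python")]
else:
    # proposed: every other platform (Windows, Mac) gets site-packages too
    sitedirs = [makepath(prefix, "lib", "site-packages")]
```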

    Changes would also be required to distutils, in the sysconfig.py file.
    It is worth noting that, as of this writing, this file does not seem to
    have been updated in line with the change of policy on the Macintosh.

Notes

    1. It would be better if this change could be included in Python 2.1, as
       changing something of this nature is better done sooner, rather than
       later, to reduce the backward-compatibility burden. This is extremely
       unlikely to happen at this late stage in the release cycle, however.

    2. This change does not preclude packages using the current location -
       the change only adds a directory to sys.path, it does not remove
       anything.

    3. In the Windows distribution of Python 2.1 (beta 1), the
       Lib\site-packages directory has been removed. It would need to be
       reinstated.


Copyright

    This document has been placed in the public domain.

_______________________________________________
Distutils-SIG maillist  -  Distutils-SIG at python.org
http://mail.python.org/mailman/listinfo/distutils-sig



From mal at lemburg.com  Fri Mar 30 18:09:26 2001
From: mal at lemburg.com (M.-A. Lemburg)
Date: Fri, 30 Mar 2001 18:09:26 +0200
Subject: [Python-Dev] Assigning to __debug__
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com> <15044.43144.133911.800065@anthem.wooz.org> <20010330174828.K13066@xs4all.nl>
Message-ID: <3AC4AFB6.23A17755@lemburg.com>

Thomas Wouters wrote:
> 
> On Fri, Mar 30, 2001 at 10:38:48AM -0500, Barry A. Warsaw wrote:
> 
> > What I think we'd prefer is a separate switch to control
> > optimization and the disabling of assert.
> 
> You mean something like
> 
> #!/usr/bin/python -fno-asserts -fno_debug_ -fdocstrings -fdeadbranch

Sounds like a good idea, but how do you tell the interpreter
which asserts to leave enabled and which to remove from the 
code ?

In general, I agree, though: a more fine grained control
over optimizations would be a Good Thing (even more since we
are talking about non-existing code analysis tools here ;-).

-- 
Marc-Andre Lemburg
______________________________________________________________________
Company & Consulting:                           http://www.egenix.com/
Python Pages:                           http://www.lemburg.com/python/



From paul at pfdubois.com  Fri Mar 30 19:01:39 2001
From: paul at pfdubois.com (Paul F. Dubois)
Date: Fri, 30 Mar 2001 09:01:39 -0800
Subject: [Python-Dev] Assigning to __debug__
Message-ID: <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com>

FWIW, this change broke a lot of my code and it took an hour or two to fix
it. I too was misled by the wording when __debug__ was introduced. I could
swear there were even examples of assigning to it, but maybe I'm dreaming.
Anyway, I thought I could.

Regardless of my delusions, this is another change that breaks code in the
middle of a beta cycle. I think that is not a good thing. It is one thing
when one goes to get a new beta or alpha; you expect to spend some time
then. It is another when one has been a good soldier and tried the beta and
is now using it for routine work and updating to a new version of it breaks
something because someone thought it ought to be broken. (If I don't use it
for my work I certainly won't find any problems with it). I realize that
this can't be a hard and fast rule but I think this one in particular
deserves warning status now and change in 2.2.




From barry at digicool.com  Fri Mar 30 19:16:28 2001
From: barry at digicool.com (Barry A. Warsaw)
Date: Fri, 30 Mar 2001 12:16:28 -0500
Subject: [Python-Dev] Assigning to __debug__
References: <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com>
Message-ID: <15044.49004.757215.882179@anthem.wooz.org>

>>>>> "PFD" == Paul F Dubois <paul at pfdubois.com> writes:

    PFD> Regardless of my delusions, this is another change that
    PFD> breaks code in the middle of a beta cycle.

I agree with Paul.  It's too late in the beta cycle to break code, and
I /also/ dimly remember assignment to __debug__ being semi-blessed.

Let's make it a warning or revert the change.

-Barry



From guido at digicool.com  Fri Mar 30 19:19:31 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 12:19:31 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 10:38:48 EST."
             <15044.43144.133911.800065@anthem.wooz.org> 
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com> <3AC48D63.A8AFA489@lemburg.com> <200103301402.JAA23365@cj20424-a.reston1.va.home.com>  
            <15044.43144.133911.800065@anthem.wooz.org> 
Message-ID: <200103301719.MAA24153@cj20424-a.reston1.va.home.com>

>     GvR> The *only* acceptable use for __debug__ is to get rid of code
>     GvR> that is essentially an assertion but can't be spelled with
>     GvR> just an assertion, e.g.
> 
> Interestingly enough, last night Jim Fulton and I talked about a
> situation where you might want asserts to survive running under -O,
> because you want to take advantage of other optimizations, but you
> still want to assert certain invariants in your code.
> 
> Of course, you can do this now by just not using the assert
> statement.  So that's what we're doing, and for giggles we're multiply
> inheriting the exception we raise from AssertionError and our own
> exception.  What I think we'd prefer is a separate switch to control
> optimization and the disabling of assert.

That's one of the things I was alluding to when I talked about more
diversified control over optimizations.  I guess then the __debug__
variable would indicate whether or not assertions are turned on;
something else would let you query the compiler's optimization level.
But assigning to __debug__ still wouldn't do what you wanted (unless
we decided to *make* this the way to turn assertions on or off in a
module -- but since this is a compile-time thing, it would require
that the rhs of the assignment was a constant).

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Fri Mar 30 19:37:37 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 12:37:37 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: Your message of "Fri, 30 Mar 2001 09:01:39 PST."
             <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com> 
References: <ADEOIFHFONCLEEPKCACCCEHKCHAA.paul@pfdubois.com> 
Message-ID: <200103301737.MAA24325@cj20424-a.reston1.va.home.com>

> FWIW, this change broke a lot of my code and it took an hour or two to fix
> it. I too was misled by the wording when __debug__ was introduced. I could
> swear there were even examples of assigning to it, but maybe I'm dreaming.
> Anyway, I thought I could.
> 
> Regardless of my delusions, this is another change that breaks code in the
> middle of a beta cycle. I think that is not a good thing. It is one thing
> when one goes to get a new beta or alpha; you expect to spend some time
> then. It is another when one has been a good soldier and tried the beta and
> is now using it for routine work and updating to a new version of it breaks
> something because someone thought it ought to be broken. (If I don't use it
> for my work I certainly won't find any problems with it). I realize that
> this can't be a hard and fast rule but I think this one in particular
> deserves warning status now and change in 2.2.

OK, this is the second confirmed report of broken 3rd party code, so
we'll change this into a warning.  Jeremy, that should be easy, right?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From guido at digicool.com  Fri Mar 30 19:41:41 2001
From: guido at digicool.com (Guido van Rossum)
Date: Fri, 30 Mar 2001 12:41:41 -0500
Subject: [Python-Dev] PEP: Use site-packages on all platforms
In-Reply-To: Your message of "Fri, 30 Mar 2001 16:52:04 +0100."
             <714DFA46B9BBD0119CD000805FC1F53B01B5ADD9@ukrux002.rundc.uk.origin-it.com> 
References: <714DFA46B9BBD0119CD000805FC1F53B01B5ADD9@ukrux002.rundc.uk.origin-it.com> 
Message-ID: <200103301741.MAA24378@cj20424-a.reston1.va.home.com>

I think this is a good idea.  Submit the PEP to Barry!

I doubt that we can introduce this into Python 2.1 this late in the
release cycle.  Would that be a problem?

--Guido van Rossum (home page: http://www.python.org/~guido/)



From martin at loewis.home.cs.tu-berlin.de  Fri Mar 30 20:31:31 2001
From: martin at loewis.home.cs.tu-berlin.de (Martin v. Loewis)
Date: Fri, 30 Mar 2001 20:31:31 +0200
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <200103301330.IAA23144@cj20424-a.reston1.va.home.com> (message
	from Guido van Rossum on Fri, 30 Mar 2001 08:30:18 -0500)
References: <200103300755.f2U7txq07744@mira.informatik.hu-berlin.de> <200103301330.IAA23144@cj20424-a.reston1.va.home.com>
Message-ID: <200103301831.f2UIVVm01525@mira.informatik.hu-berlin.de>

> I checked in a fix to IDLE too, but it seems you were using an
> externally-installed version of IDLE.

Sorry about that; I actually used one from CVS, with a sticky 2.0 tag
:-(

> Assigning to __debug__ was never well-defined.  You used it at your
> own risk.

When __debug__ was first introduced, the NEWS entry read

# Without -O, the assert statement actually generates code that first
# checks __debug__; if this variable is false, the assertion is not
# checked.  __debug__ is a built-in variable whose value is
# initialized to track the -O flag (it's true iff -O is not
# specified).  With -O, no code is generated for assert statements,
# nor for code of the form ``if __debug__: <something>''.

So it clearly says that it is a variable, and that assert will check
its value at runtime. I can't quote any specific messages, but I
recall that you've also explained it that way in public.
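The runtime half of that description is easy to demonstrate: run normally, the assert fires and the ``if __debug__:`` block executes; under -O, both are compiled away entirely.

```python
def checked_sqrt(x):
    assert x >= 0, "negative input"      # no code generated under -O
    if __debug__:
        checked_sqrt.checked = True      # this whole block vanishes under -O
    return x ** 0.5
```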

Regards,
Martin



From tim.one at home.com  Fri Mar 30 22:17:00 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 30 Mar 2001 15:17:00 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <018001c0b922$b58b5d50$0900a8c0@SPIFF>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFMJJAA.tim.one@home.com>

[Guido]
> In any case, YOU SHOULD NEVER INTRODUCE VARIABLES USING THE
> __UNDERSCORE__ CONVENTION!

[/F]
> is the "__version__" convention documented somewhere?

In the Language Reference manual, section "Reserved classes of identifiers",
middle line of the table.  It would benefit from more words, though (it just
says "System-defined name" now, and hostile users are known to have trouble
telling themselves apart from "the system" <wink>).




From tim.one at home.com  Fri Mar 30 22:30:53 2001
From: tim.one at home.com (Tim Peters)
Date: Fri, 30 Mar 2001 15:30:53 -0500
Subject: [Python-Dev] Assigning to __debug__
In-Reply-To: <200103301831.f2UIVVm01525@mira.informatik.hu-berlin.de>
Message-ID: <LNBBLJKPBEHFEDALKOLCMEFPJJAA.tim.one@home.com>

Take a trip down memory lane:

    http://groups.yahoo.com/group/python-list/message/19647

That's the c.l.py msg in which Guido first introduced the idea of __debug__
(and DAMN was searching life easier before DejaNews lost its memory!).

The debate immediately following that (cmdline arguments and all) is being
reinvented here now.

Nothing actually changed from Guido's first proposal (above), except that he
gave up his opposition to making "assert" a reserved word (for which
far-seeing flexibility I am still most grateful), and he actually implemented
the "PS here's a variant" flavor.

I wasn't able to find anything in that debate where Guido explicitly said you
couldn't bind __debug__ yourself, but neither could I find anything saying
you could, and I believe him when he says "no binding" was the *intent*
(that's most consistent with everything he said at the time).

those-who-don't-remember-the-past-are-doomed-to-read-me-nagging-them-
    about-it<wink>-ly y'rs  - tim




From clee at gnwy100.wuh.wustl.edu  Sat Mar 31 17:08:15 2001
From: clee at gnwy100.wuh.wustl.edu (Christopher Lee)
Date: Sat, 31 Mar 2001 09:08:15 -0600 (CST)
Subject: [Python-Dev] submitted patch to linuxaudiodev
Message-ID: <15045.62175.301007.35652@gnwy100.wuh.wustl.edu>

I'm a long-time listener/first-time caller and would like to know what I
should do to have my patch examined.  I've included a description of the
patch below.

Cheers,

-chris

-----------------------------------------------------------------------------
[reference: python-Patches #412553]

Problem:

test_linuxaudiodev.py failed with a "Resource temporarily busy" message
(under the CVS version of Python).

Analysis:

The lad_write() method attempts to write continuously to /dev/dsp (or
equivalent); when the audio buffer fills, write() returns an error code and
errno is set to EAGAIN, indicating that the device buffer is full.
lad_write() interprets this as an error and, instead of trying the write
again, returns NULL.

Solution:

Use select() to wait until the audio device becomes writable, and test for
EAGAIN after doing a write().  I've submitted patch #412553, which
implements this solution (use python21-lihnuxaudiodev.c-version2.diff).
With this patch, test_linuxaudiodev.py passes.  This patch may also be
relevant for the Python 2.0.1 bugfix release.
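A Python sketch of that retry logic (a non-blocking pipe stands in for
/dev/dsp here; the real patch makes the equivalent change in C inside
lad_write()):

```python
import errno
import os
import select

def write_all(fd, data):
    """Write all of data to a non-blocking fd, waiting with select()
    when the buffer is full instead of treating EAGAIN as fatal."""
    view = memoryview(data)
    while view:
        try:
            n = os.write(fd, view)
            view = view[n:]
        except OSError as e:
            if e.errno != errno.EAGAIN:
                raise
            # buffer full: block until the descriptor is writable again
            select.select([], [fd], [])
```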


System configuration:

linux kernel 2.4.2 and 2.4.3 SMP on a dual-processor i686 with a
SoundBlaster Live! Value soundcard.